
Automation, Collaboration, & E-Services (ACES)

Florin Gheorghe Filip · Constantin-Bălă Zamfirescu · Cristian Ciurea

Computer-Supported Collaborative Decision-Making

Automation, Collaboration, & E-Services

Volume 4

Series editor
Shimon Y. Nof, Purdue University, West Lafayette, USA
e-mail: [email protected]

About this Series

The Automation, Collaboration, & E-Services (ACES) series publishes new developments and advances in the fields of automation, collaboration and e-services, rapidly and informally but with high quality. It captures the scientific and engineering theories and techniques addressing the challenges of the megatrends of automation and collaboration. These trends, which define the scope of the ACES series, are evident in wireless communication, Internetworking, multi-agent systems, sensor networks, and social robotics, all enabled by collaborative e-Services. Within the scope of the series are monographs, lecture notes, and selected contributions from specialized conferences and workshops.

More information about this series at http://www.springer.com/series/8393

Florin Gheorghe Filip · Constantin-Bălă Zamfirescu · Cristian Ciurea

Computer-Supported Collaborative Decision-Making

Florin Gheorghe Filip
Information Science and Technology Section, INCE and BAR
The Romanian Academy
Bucharest, Romania

Constantin-Bălă Zamfirescu
Faculty of Engineering, Department of Computer Science and Automatic Control
Lucian Blaga University of Sibiu
Sibiu, Romania

Cristian Ciurea
Department of Economic Informatics and Cybernetics
ASE Bucharest – The Bucharest University of Economic Studies
Bucharest, Romania

ISSN 2193-472X        ISSN 2193-4738 (electronic)
Automation, Collaboration, & E-Services
ISBN 978-3-319-47219-5        ISBN 978-3-319-47221-8 (eBook)
DOI 10.1007/978-3-319-47221-8

Library of Congress Control Number: 2016953309

© Springer International Publishing AG 2017

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made.

Printed on acid-free paper

This Springer imprint is published by Springer Nature.
The registered company is Springer International Publishing AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Foreword

Decisions, we all have to make them. They influence our world. We endeavor to make good decisions, since we know they always bear consequences. Poor decisions often lead to negative results; sometimes, even good decisions may not guarantee positive results. So we try to learn how to improve our decisions to increase gains and reduce risks. Evidently, humans have been trying to do the same for a very long time. For instance, from the Bible we can learn that:

• "Listen to advice and accept instruction, and in the end you will be wise"—Proverbs, 19:20 (ca. 950 BC)
• Two heads are better than one, originating from "Therefore two are better than one, for they may enjoy better profit of their labor"—Ecclesiastes, 4:9 (ca. 940 BC)

There are a number of obvious advantages when multiple participants collaborate in deliberating and reaching a decision. One may think: "Of course, they can all be happy if things go well; they can blame each other if things go wrong". But while sharing responsibility can be an advantage, there are other significant merits. Multiple humans can debate and integrate diverse experiences, opinions, and views, and negotiate over risks, alternative plans, and even conflicting positions. Sensors and robots can similarly integrate and fuse multiple types of readings, locations, perspectives, and computational intelligence. They (groups of people, robots, and sensors) can negotiate, back each other up, help each other to overcome delays or shortages of knowledge, tools, and energy, and finally make timely and "best" decisions. Such "best" decisions imply benevolent group participants, able to consider all available information and logic, and to balance and settle their respective needs, priorities, constraints, risks, and objectives. That seems truly smart.
Knowing all of that, throughout history people have indeed used the instruments of meetings, committees, teams, government bodies, and other groupings to follow the wisdom of multiple brains in their effort to make better decisions. What is new? To answer this question, Academician F.G. Filip and his coauthors, Profs. C.-B. Zamfirescu and C. Ciurea, combine their accomplished expertise in the theory and

practice of decision systems in a creative way. They begin by explaining the relation and mutual roles of collaboration and decision-making. With the advent of computers, communication, and cyber, they describe how and why DSS, decision support systems, could evolve and progress. With the advent of computer-supported collaboration and cyber-collaborative systems, they evaluate various collaborative methods and the role of collaboration engineering. What is new and inspiring, as presented in this book for students, developers, practitioners and researchers, is that multiple decision-makers can now collaborate with each other and beyond. They (we) can reach significantly wiser decisions in even more complex situations, by collaborating more effectively and with higher levels of collaborative intelligence. These new capabilities are visionary, yet already embedded and enabled by essential cyber technologies: from data, mobile, web and social networking technologies, to advanced cyber-collaborative support systems and useful applications. The authors are contributing to the ACES book series, and through it to enriching the science and knowledge of computer- and cyber-supported collaboration for better decisions: for better society, better well-being, and better understanding.

West Lafayette, IN, USA
September 2016

Shimon Y. Nof
Professor and PRISM Center Director
Purdue University

Preface

This is a book about how management and control decisions are made by persons who collaborate and possibly use the support of an information system. In the book, we adopt the following definitions:

The decision is the result of conscious human activities aiming at choosing a course of action for attaining a certain objective (or a set of objectives). It normally implies allocating the necessary resources, and it is the result of processing information and knowledge performed by a person (or a group of persons) who is empowered to make the choice and is accountable for the quality of the solution adopted to solve a particular problem or situation.

The act of collaboration implies that several entities work together and share responsibilities in order to jointly plan, implement and evaluate a program of activities meant to achieve a common goal and jointly generate value.

A collaborative group is made up of several members, who are assigned or decide by themselves to jointly attain a set of common goals by carrying out a set of activities and using a number of procedures and techniques.

A decision support system (DSS) is an anthropocentric and evolving information system which is meant to implement the functions of a human support team that would otherwise be necessary to help the decision-maker overcome the limits and constraints he/she may encounter when trying to solve complex and complicated decision problems that count.

The Context of Writing the Book

In 2007, the second edition of the "Decision Support System" book by F.G. Filip was published in Romanian by the Technical Publishers, Bucharest. It contained a presentation of DSS concepts illustrated by Dispatcher®, a practical system meant to support production planning and control decision-making in the milieu of continuous process industries. In the final section of the book, the author made a

promise to come back, together with several of his colleagues, with new books about particular classes of DSS, including group decision support systems (GDSS).

C.B. Zamfirescu received his Ph.D. in 2005 from the Technical University "Politehnica" of Bucharest with a thesis entitled "Anthropocentric Group Decision Support Systems". The document contained several original results, such as agent-based social simulation for group decisions, swarming models of computation to automate the facilitation of group decisions, a goal-oriented dialog system with inconsistent knowledge bases, and so on. A part of these results, together with newer ones, is contained in the book we are proposing to our readers.

In 2011, Cristian Ciurea received his Ph.D. from the Academy of Economic Studies of Bucharest with a thesis entitled "Collaborative Systems Metrics". Several ideas, such as applications of collaborative systems in the economy, quality characteristics of collaborative systems, collaborative virtual organizations, and collaborative production processes, were contained in the thesis, and a part of them is included in the present book.

The last decades have seen a rapid advance in information and communication technologies and in associated concepts. The new versions of the Internet protocol, social networks, mobile and cloud computing, and business intelligence and analytics have had a serious impact not only on information system design, but also on the way business is conducted and decisions are made. Collaborative activities carried out by various entities, such as enterprises, people, machines, computers and so on, are ever more numerous and visible. Two very recent books (Nof et al. 2015; Nunamaker et al. 2015) present the new achievements in collaborative systems under the influence of, and enabled by, the new information and communication technologies (I&CT) and are a good example to follow.
The new Springer series Automation, Collaboration, & E-Services (ACES), which is meant "to capture" the scientific and engineering theories and technologies addressing the challenges of the megatrends of automation and collaboration, was viewed by the authors as a valuable means of making an up-to-date view of computer-supported collaborative decision-making available to various readers. The invitation sent by Prof. S.Y. Nof, the ACES series editor, came at the right time and stimulated the authors to propose the current book.

The Book Goal

The book is intended to present a balanced view of the computer-supported collaborative decision-making domain, including both well-established concepts and a selection of new results in the domains of methods and key technologies. It is meant to answer several questions, such as: (a) "How are business models evolving towards ever more collaborative schemes?"; (b) "What is the role of the decision-maker in the new business and technological context?"; (c) "What are the basic characteristic attributes and trends in the domain of decision-supporting information systems?"; (d) "Which are the basic methods to aggregate the individual preferences of the people who collaborate in decision-making activities?"; (e) "How far can automation go?"; (f) "What is the impact of modern information and communication technologies on the design and usage of decision-supporting information systems meant for groups of people?".

The book is intended to be a reference text for researchers, analysts and system developers in the field of information systems meant to support management and control decision-making. Managers interested in gaining competitive advantage on the market by using modern methods and technologies can also benefit from studying the material. The book is also recommended as a textbook for graduate students in automatic control, computer science, informatics, industrial engineering, management, and business administration.

Material Organization

The book is composed of five chapters, as follows:

Chapter 1, entitled "Collaboration and Decision-making in Context", is meant to set the stage for the following chapters by describing the business context and introducing the terminology to be used throughout the text. We review the main concepts concerning management and control schemes, the mission and allocated functions of the human agent in the loop, and basic aspects of multi-attribute/multi-participant decision-making.

Chapter 2, entitled "Decision Support Systems", reviews basic concepts of the decision support systems domain. Several topics are addressed in sequence, such as decisions and decision-makers, the main particular subclasses of the general DSS class, and DSS construction, so that the reader can get a view of DSS "physiology" (functions and usage), "anatomy" (composition), and "ontogeny" (design and construction). Particular attention is paid to group/multi-participant DSS and intelligent DSS.

Chapter 3, entitled "Collaborative Activities and Methods", is meant to review the most important methods used in collaborative human activities, with a particular emphasis on group decision-making. To set the stage for the method presentations, the chapter starts by explaining several concepts, such as e-collaboration, collaborative groups, and crowd participation, and reviews the development history of computer-supported collaboration. The chapter continues by reviewing the most frequently used voting rules defined in social choice theory and their extensions employed in knowledge-driven DSS. The engineering issues of deploying computer-supported collaborative activities in real working environments are presented in the final section of the chapter.

Chapter 4, entitled "Essential Enabling Technologies", contains a review of the major key technologies which have significantly influenced the design and usage of information systems.
Business intelligence and analytics, Web technology, social networks, and mobile and cloud computing are described in "parallel" sections with a similar organization. Their impact on computer-aided decision-making is highlighted. Biometric systems and serious digital games, together with their possible usage to ensure authorized access and facilitate users' training, are presented in the final sections of the chapter.

Chapter 5, entitled "Application Cases", contains three sections addressing: (a) the usage of biology-inspired models to simulate the facilitator activity, (b) an application of big data in labor market analysis, and (c) an integrated and evolving information platform used in various collaborative decision-making cases.

Each chapter ends with a section of Notes and comments that highlights the main ideas presented and guides the reader through the most important references, should she/he want to go deeper into the field. In each chapter, the authors have presented a selection of relevant standards. The book is organized in accordance with a quasi-sequential-parallel scheme which reflects the recommended order of studying the chapters (see figure below).

Throughout the material, there are several "pointers" to sections where the concepts and ideas just introduced are addressed in more detail. Consequently, the reader can design his/her study order in accordance with his/her interests and curiosity.

Acknowledgments

The material included in this book is a result of the studies and research carried out by the authors at several institutions: the National Institute for Informatics (ICI), the Centre for IT and Decision-Making of the National Institute for Economic Research (INCE) of the Romanian Academy, École Centrale de Lille, the "Lucian Blaga" University of Sibiu, the Katholieke Universiteit Leuven, the German Research Center for Artificial Intelligence, and the Academy of Economic Studies (ASE) of Bucharest. The authors are grateful to their colleagues who provided them with useful opinions and ideas on the book topics: Profs. S.Y. Nof, P. Borne, I. Dumitrache, D. Popescu, I. Dzitac, B. Bărbat, I. Ivan, H. Van Brussel, Dr. P. Valckenaers, Profs. C. Boja, P. Pocatilu, Luminiţa Duta, M. Cioca, Dr. A.M. Suduc, and Dr. M. Bizoi. The feedback of the students who attended the master courses on "Decision Support Systems" at the Technical University "Politehnica" of Bucharest; "Multi Agent Systems", "Human Computer Interaction", and "Industrial Informatics" at the "Lucian Blaga" University of Sibiu; and "Mobile Applications Security" at the Bucharest University of Economic Studies is also appreciated.

Several scholars contributed useful advice and/or sent valuable documentation. An inherently incomplete list includes: Profs. D.J. Power, L. Camarinha-Matos, G. Tecuci, Yong Shi, Gang Kou, G. Metakides, Pascale Zaraté, K.E. Zavadskas, Ulle Endriss, A. Kaklauskas, Dan Tufis, S.M. Gupta, H. Panetto, R.E. Precup, Dan Ștefănoiu, N. Paraschiv, H. Dragomirescu, Dr. G. Neagu, Profs. G.H. Tzeng, L. Monostori, Shaofeng Liu, Dr. Angela Ioniţǎ, Profs. L.A. Gomes, M. Mora, and Dr. D.A. Donciulescu.

Prof. I. Buciu, Dr. C. Brândaş, Dr. D. Pânzaru, Dr. C. Cândea, and Dr. I.A. Ştefan agreed to contribute specialized sections to the book. They should receive the authors' sincere thanks.

Many ideas and research results contained in the book were presented in journal and conference papers. The authors are grateful to Profs. P. Borne, Y. Shi, G. Kou, I. Dzitac, M.J. Manolescu, A. Ortiz, M. Tang, M. Brdys, G. Lefranc, Carolina Lagos, Ngoc Thanh Nguyen, C. Bădicǎ, and Felisa Cordova, who offered us the opportunity to hold talks and lectures at various conferences and institutions. Additionally, some of the latest achievements in social choice theory were presented during many workshops supported by COST Action IC1205 on Computational Social Choice. The discussions held at the ITQM (Information Technology and Quantitative Management), IFAC CC (Coordinating Committee) 5, ICCCC (International Conference on Computers, Communications and Control) and IE (Informatica Economică) series of conferences have contributed to clarifying many ideas in the book domain.

Special thanks are due to Prof. S.Y. Nof, who encouraged our research in the fields of large-scale complex systems and collaborative systems, invited us to submit the manuscript to Springer, and wrote the foreword of the book. Thanks are also due to Dr. Lorenţa Popescu and Ms. Cosmina Almăşan, who patiently read the manuscript and made an essential contribution to improving the English of the text.
Finally, the contribution of Mr. Holger Schaepe (Editorial Assistant), Mr. Ramamoorthy Rajangam (Project Coordinator) and Mr. Henry Pravin Arokiaraj (Production Editor) of Springer in preparing and producing this book is acknowledged and appreciated.

Bucharest, Romania        Florin Gheorghe Filip
Sibiu, Romania            Constantin-Bălă Zamfirescu
Bucharest, Romania        Cristian Ciurea
July 2016

References

Nof SY, Ceroni J, Jeong W, Moghaddam M (2015) Revolutionizing Collaboration through e-Work, e-Business, and e-Service, vol 2. Springer
Nunamaker JF Jr, Briggs RO, Romano NCR Jr (2015) Collaboration Systems: Concept, Value, and Use. Routledge

Contents

1 Collaboration and Decision-Making in Context ...... 1
  1.1 The Evolving Controlled Object ...... 1
    1.1.1 The Enterprise as a Large-Scale System ...... 2
    1.1.2 Adopted Terminology ...... 5
    1.1.3 Classification ...... 5
  1.2 From Hierarchical Control to Cooperative Schemes ...... 6
    1.2.1 Hierarchical Systems Approach ...... 7
    1.2.2 Towards Cooperative Schemes ...... 8
  1.3 The Role of the Human in the System ...... 10
    1.3.1 The Human in the Loop ...... 11
    1.3.2 Allocation of Functions and Levels of Automation ...... 12
    1.3.3 The Need for Effective Computer Supported Collaboration ...... 16
  1.4 Towards Anthropocentric Information Systems ...... 17
    1.4.1 Several Questions and Answers ...... 17
    1.4.2 Attributes ...... 18
  1.5 Decisions and Decision Units ...... 19
    1.5.1 Definitions ...... 19
    1.5.2 Possible Approaches ...... 20
    1.5.3 Multicriteria Decision Models ...... 22
  1.6 Notes and Comments ...... 25
  References ...... 26

2 Decision Support Systems ...... 31
  2.1 Decisions and Decision-Makers ...... 32
    2.1.1 Herbert Simon's Process Model of Decision-Making ...... 32
    2.1.2 Limits and Constraints of Human Decision-Makers ...... 34
    2.1.3 Classes of Decision-Makers ...... 34
  2.2 DSS Basic Concepts ...... 35
    2.2.1 Definition and Characteristic Features ...... 36


    2.2.2 DSS Technology ...... 37
    2.2.3 A Special Case: Real-Time DSS for Control Applications ...... 40
  2.3 DSS Subclasses ...... 42
    2.3.1 Classification 1 (with Respect to Decision Maker Type) ...... 43
    2.3.2 Classification 2 (with Respect to Type of Support) ...... 44
    2.3.3 Classification 3 (with Respect to the Technological Orientation) ...... 44
    2.3.4 Special Cases ...... 45
  2.4 DSS Construction ...... 51
    2.4.1 Influence Factors ...... 51
    2.4.2 Design and Implementation Approaches ...... 53
    2.4.3 Selection of the I&CT Tools ...... 57
    2.4.4 Integration and Evaluation ...... 60
  2.5 Notes and Comments ...... 62
  References ...... 63

3 Collaborative Activities and Methods ...... 71
  3.1 Computer Supported Collaboration ...... 71
    3.1.1 Collaboration, e-Collaboration and Collaborative Groups ...... 71
    3.1.2 Brief History of e-Collaboration ...... 74
    3.1.3 More About Group Support Systems ...... 77
    3.1.4 Crowdsourcing—A Special Case of Collaboration ...... 79
  3.2 Fundamentals of Social Choice ...... 80
    3.2.1 Aggregating Individual Preferences ...... 81
    3.2.2 Voting Mechanisms ...... 83
    3.2.3 Axioms and Paradoxes ...... 87
    3.2.4 Implications for Group Support Systems ...... 90
  3.3 Further Extensions from Social Choice Theory to Group Decisions ...... 92
    3.3.1 Judgment Aggregation ...... 93
    3.3.2 Resource Allocation ...... 97
    3.3.3 Group Argumentation ...... 102
  3.4 Collaboration Engineering ...... 104
    3.4.1 Basic Collaboration Patterns ...... 105
    3.4.2 Collaborative Decision-Making Process ...... 108
    3.4.3 Deployment of Collaboration Models ...... 110
  3.5 Notes and Comments ...... 113
  References ...... 114

4 Essential Enabling Technologies ...... 121
  4.1 Modern Data Technologies ...... 122
    4.1.1 Data-Driven Decision Support Systems ...... 122
    4.1.2 Big Data ...... 124
    4.1.3 Business Intelligence and Analytics ...... 125
    4.1.4 Towards a Data Science ...... 127
  4.2 Web Technologies ...... 129
    4.2.1 The Concept ...... 129
    4.2.2 Particular Subclasses ...... 129
    4.2.3 Usages and Relevance to Collaborative Decision-Making ...... 131
    4.2.4 Standards ...... 134
  4.3 Social Networks ...... 135
    4.3.1 The Concept ...... 135
    4.3.2 Particular Subclasses ...... 136
    4.3.3 Usages and the Relevance to Collaborative Decision-Making ...... 138
    4.3.4 Standards ...... 140
  4.4 Mobile Computing ...... 141
    4.4.1 The Concept ...... 141
    4.4.2 Classes and Subclasses ...... 142
    4.4.3 Usage and Relevance to Collaborative Decision-Making ...... 146
    4.4.4 Mobile Cloud Computing ...... 149
  4.5 Biometric Technologies for Virtual Electronic Meetings (By I. Buciu) ...... 151
    4.5.1 The Concept ...... 152
    4.5.2 Particular Subclasses ...... 154
    4.5.3 Mobile and Web-Based Technologies ...... 158
    4.5.4 Possible Attacks ...... 159
    4.5.5 Attributes of Effective Technologies ...... 160
    4.5.6 Standards ...... 161
  4.6 Game Technology as a Tool for Collaborative Decision-Making (By Ioana Andreea Ștefan) ...... 162
    4.6.1 The Game Mechanics ...... 163
    4.6.2 Software Tools ...... 164
  4.7 Notes and Comments ...... 166
  References ...... 167

5 Applications ...... 177
  5.1 A Practical Swarming Model for Facilitating Collaborative Decisions ...... 177
    5.1.1 The Concept of Stigmergic Coordination ...... 178
    5.1.2 The Computational Model and Its Implementation ...... 180

    5.1.3 Some Experimental Results ...... 183
    5.1.4 Discussion and Concluding Remarks ...... 187
  5.2 An Application of Data Mining to Decisions in Labour Market (By Claudiu Brândaş and Ciprian Pânzaru) ...... 188
    5.2.1 A Framework of a Labour Market Decision Support System (LM-DSS) ...... 189
    5.2.2 Example ...... 190
    5.2.3 Comments ...... 193
  5.3 iDecisionSupport Platform (By Ciprian Cândea) ...... 194
    5.3.1 The Concept ...... 195
    5.3.2 Current Version ...... 198
    5.3.3 The Evolution ...... 206
  References ...... 207

Index ...... 213

Chapter 1
Collaboration and Decision-Making in Context

The goal of this chapter is to provide a historical account of the evolutions in the domain of the book and to set the stage for the concepts and solutions to be presented in the following chapters, including the introduction of the terminology adopted throughout this text. Consequently, we aim at providing answers to a series of questions, such as: (a) "How have organizations been evolving over the last decades?"; (b) "Which have been the corresponding trends in management and control schemes?"; (c) "How are management and control functions allocated to humans and automation equipment?"; (d) "Which are the desirable properties of the information processing tools meant to support the human agent in carrying out his/her tasks?"

The remaining part of this chapter is organized as follows. In the first section, we review the ever increasing complexity of the controlled objects over the last four decades and describe the characteristic features of collaborative networks. The next section contains a historical account of the technology- and business-driven evolutions of management and control schemes, from hierarchical multilevel control to more cooperative solutions. The third section addresses the role of the human agent in management and control tasks. In the fourth section, we present the requirements for the human-centered information tools meant to support the activities of the person(s) in charge of making decisions in management and control tasks. A brief review of multi-criteria decision models, together with an interpretation of criteria from a multi-participant decision-maker's perspective, is provided in the fifth and final section.

1.1 The Evolving Controlled Object

Knowledge workers and process operators make various management and control decisions in their area of influence and responsibility. Decision-making activities and styles should be adapted to the specific context of the organizations the decision

unit is placed in. In this section, we will present a survey of the business- and technology-driven evolutions of the controlled objects (organizations and technological processes), with particular emphasis on the manufacturing enterprise.

1.1.1 The Enterprise as a Large-Scale System

In the late 1960s and early 1970s, the study of steelworks, petrochemical plants, power systems, transport networks and water systems received special attention. The interest was motivated by the hope that building adequate management and control systems for such enterprises would lead to operation improvements, with important economic effects and savings in material consumption. Enterprises were viewed and modelled as large-scale complex systems. Their structure of interconnected subsystems was the main common characteristic feature of such systems (Mesarovic et al. 1970; Findeisen 1982; Findeisen et al. 1980; Jamshidi 1983). Several types of interconnections could be identified, such as:

• Resource and objective sharing interconnections at the system level;
• Flexible interconnections through buffer units for material stocking, which were designed to attenuate the possible differences in the operation rates of the processing plants which fed and drained the inventories;
• Direct interconnections which were established between plants, as in rolling mills or electric power networks, where buffer units were not allowed for technical reasons.

In the 1990s, large-scale systems became more and more complicated and complex under the influence of several factors, such as:

• The trend to continuously integrate enterprises among themselves and with their material suppliers and product distributors. New paradigms, such as the extended or virtual enterprise, started to be used.
A particular form of integration which has received considerable attention from academia and business circles after the year 2000 is the double-loop (forward and reverse) remanufacturing complex, which implies coordination among various enterprises serving different and complementary purposes, such as: product manufacturing, selling and servicing, and EOL (End of Life) goods collecting, selection, disassembly and re-utilization (Ilgin and Gupta 2012).

• The variety of technologies which, belonging to different domains such as mechanics, electronics, and information technology and communications, were used in the ever larger number of interacting subsystems;

• The diversity of cultures of the people involved; in particular, experts and designers who possessed different domain knowledge encountered communication problems that were hard to solve. Besides, the process operators and the people in charge of maintenance tasks, who have to handle both routine and emergency situations, sometimes possessed uneven levels of skills, training and even habits (Mårtenson 1990).

At the beginning of the new Millennium, the management and control problems became even more complicated due to several factors, such as the following.

The new market requirements for increased product variety, complexity and customization. As Camarinha-Matos et al. (2009) noticed, "even the notion of product is changing, given place to notion of extended products, under which, besides the physical product itself, associated services, and knowledge become very important". The extended product concept is characterized by intelligence, real-time self-diagnosis and maintenance, and traceability. Consequently, new subjects of preoccupation show up, such as sustainability, social responsibility, and full life cycle consideration. At the same time, more people are involved in decision-making activities.
The rather indirect, IT-mediated collaboration between the producer enterprise and the consumer in product design and marketing deserves a special remark. As anticipated by Toffler (1980) in his description of the "Second Wave Society", characterized, among other things, by symptoms such as mass consumption, the consumer's role and influence have increased and diversified. The prosumer, a term coined by Toffler from the combination of "producer" and "consumer", influences more and more the current developments through the modern technologies of Recommender systems and Business Intelligence and Analytics (to be presented in Sect. 4.1). At present, the concept of crowdsourcing is gaining ever more traction in various domains, including collaborative decision-making (Chiu et al. 2014). It is viewed as "the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call" (Howe 2006). The subject will be developed in Sect. 3.1.4.

The business- and technology-driven trends to ever more closely integrate people and even machines at intra- and inter-enterprise levels (Nof 2007; Nof et al. 2006; Panetto et al. 2012). There are corresponding, timely released standards, for example ISA 95, which is meant for the integration of enterprise and control systems (Scholten 2007; Brandl 2012; Unver 2013). ISA 95 is accompanied by other standards, such as: (a) IEC (International Electrotechnical Commission) 62264, which is based on ISA 95, for business and manufacturing integration, and (b) B2MML (Business to Manufacturing Markup Language) of WBF (World Batch Forum). The advanced smart factory concept, meant to achieve integration through the usage of the Internet of things, was launched and documented (Zuehlke 2008, 2010; Thoben et al. 2014; Zamfirescu et al. 2013, 2014).
The increased importance of systems of systems (SoS). SoS, which can be met in military and civil applications, such as critical infrastructures of computer networks, transportation systems, and power, gas and water networks, represent a particular class of very large, complicated and safety-critical systems. Sage and Cuppan (2001) define SoS as non-monolithic entities characterized by features such as: (a) geographic distribution, (b) operational and managerial independence of the composing subsystems, (c) emergent behaviour and (d) evolutionary development; The sustainable, environmentally conscious development (Ilgin and Gupta 2012; Seok et al. 2012). Environmental standards and regulations have been released. A relevant example is ISO 14040/2006: "Principles and framework for Life Cycle Assessment-LCA" (Finkbeiner 2013). The standard addresses several aspects, such as: (a) definition of the goal and scope of the LCA, (b) the life cycle inventory analysis phase, (c) the life cycle impact assessment phase, (d) the life cycle interpretation phase, (e) reporting and critical review of the LCA, (f) limitations of the LCA, and (g) the relationship between the LCA phases. Consequently, new subjects of preoccupation show up, such as sustainability, social responsibility, and full life cycle consideration. The major impact of the new information and communication technologies (I&CT) (Nof et al. 2006, 2015; Bughin et al. 2010). In an attempt to define the new e-activities (e-manufacturing, e-work, e-service), Nof (2003) stated

as power fields, such as magnetic fields and gravitation, influence bodies to organize and stabilize, so does the sphere of computing and information technologies. It develops us and influences us to organize our work systems in a different way and purposefully, to stabilize work while effectively producing the desired outcomes.

A complementary view is expressed by Wierzbicki (2010), who identifies three megatrends of the information revolution:

• the technological megatrend of digital integration, also called the megatrend of convergence, which is characterized by the possibility to transform, transmit and process in a uniform digital form all signals, measurements, messages and so on;
• the social megatrend of dematerialization of work, also called the megatrend of change of profession, which may have the collateral impact of creating premises for equal chances for women;
• the intellectual megatrend of changing the perception of the world.

The new I&CT stimulate collaboration processes in education and learning (Ciurea 2009). In the future, a data-driven society is forecast by Power (2015). All the above developments lead to the following conclusions:

• There are ever more people involved in design, control and management decision-making. This makes intensive collaborative design, management and control activities necessary, together with large-scale communication from any place at any time.
• The corresponding decision and related problems become more complex and multi-faceted and, in a great number of cases, must be solved in real time.
• There is a real need and a market for advanced tools to support communication and collaboration.

1.1.2 Adopted Terminology

The concept of collaborative network (CN) was proposed by Camarinha-Matos and Afsarmanesh (2005):

CN is a network consisting of a variety of entities (e.g. organizations, people and even machines) that are largely autonomous, geographically distributed and heterogeneous in terms of their operating environment, culture, social capital and goals, but collaborate to better achieve common or compatible goals and whose interactions are supported by computer networks.

Camarinha-Matos et al. (2009) propose a terminology, which is adopted and used throughout this book, to characterize the various forms of interaction among different entities:

• Networking, which reflects the setting of communication and information exchange for the mutual benefit of the entities involved;
• Coordinated networking, which implies, in addition to networking, harmonizing the activities to achieve more efficient results;
• Cooperation, which, besides coordinated networking, implies resource sharing to achieve compatible goals of the entities;
• Collaboration, which, besides cooperation, means that the entities work together and share responsibilities to jointly plan, implement and evaluate a program of activities in order to achieve the common goal of jointly generating value.

1.1.3 Classification

In (Camarinha-Matos 2009; Camarinha-Matos et al. 2009), the general class of collaborative networks (CN) is decomposed into particular subclasses in accordance with several criteria, as follows:

• organizational level: a CN can be either a collaborative networked organization (CNO) or an ad hoc scheme;
• collaboration purpose: a CNO can be either a goal-oriented network or a long-term strategic network.

The same authors propose a taxonomy of the manifestation forms of collaborative networked organizations. Several examples of goal-oriented network (GON) subclasses are given below:

• virtual organization (VO) represents a temporary GON of legal persons that share their resources to achieve a common goal and whose operation is supported by a computer network. A dynamic VO is a short-term VO, which is dissolved after its goal is achieved;

• virtual enterprise (VE) is a particular case of VO made up of profit organizations (called enterprises) that are allied to respond to business opportunities;
• extended enterprise (EE) is a special case of VE characterized by the existence of a dominant enterprise;
• virtual team (VT) is a temporary GON composed of professionals who use the computer network as an interaction means to achieve a common goal.

Several examples of long-term strategic network (LTSN) subclasses are:

• virtual organization breeding environment (VOBE) is an association of organizations and supporting institutions that have agreed to cooperate on a long-term basis, sharing a set of common operating rules and a certain part of their technical resources, in order to be prepared for the quick setting-up of temporary alliances that collaborate as VEs/VOs;
• industrial cluster (IC) is a particular subclass of VOBE characterized by a common business sector and the same geographical area;
• business ecosystem, sometimes called digital ecosystem, is a VOBE which aims at preserving the local specificities of a geographical region;
• collaborative virtual laboratory (CVL) is an alliance of autonomous research organizations which can quickly become a VO/VT, when an opportunity-based collaboration is perceived.

The conclusions to be grasped at the end of this section are:

• To efficiently manage and control the evolving organizations and their constituents, it is highly recommended to exploit the characteristic features of the controlled object;
• The information and communication technologies have been changing the business models and the structure of enterprises and institutions;
• Ever more interactions among enterprises, their constituents, humans and machines have become necessary, are possible due to technology support, and are quite frequently encountered.
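The subclass relations enumerated in this section form an "is-a" taxonomy, which can be sketched in a few lines of code. The class names below are our own shorthand for the abbreviations used in the text, and the modelling choice (plain inheritance) is only an illustration, not part of the cited taxonomy:

```python
# Illustrative sketch of the collaborative-network taxonomy as an "is-a"
# hierarchy; the class names mirror the abbreviations used in the text.

class CollaborativeNetwork: pass                        # CN

class CNO(CollaborativeNetwork): pass                   # collaborative networked organization
class AdHocScheme(CollaborativeNetwork): pass

class GoalOrientedNetwork(CNO): pass                    # GON
class LongTermStrategicNetwork(CNO): pass               # LTSN

class VirtualOrganization(GoalOrientedNetwork): pass    # VO
class VirtualEnterprise(VirtualOrganization): pass      # VE: VO of profit enterprises
class ExtendedEnterprise(VirtualEnterprise): pass       # EE: VE with a dominant enterprise
class VirtualTeam(GoalOrientedNetwork): pass            # VT

class VOBE(LongTermStrategicNetwork): pass              # breeding environment
class IndustrialCluster(VOBE): pass                     # IC
class BusinessEcosystem(VOBE): pass
class CollaborativeVirtualLab(LongTermStrategicNetwork): pass  # CVL

# An extended enterprise is, transitively, a collaborative network:
assert issubclass(ExtendedEnterprise, CollaborativeNetwork)
# An industrial cluster is a long-term strategic network, not a goal-oriented one:
assert issubclass(IndustrialCluster, LongTermStrategicNetwork)
assert not issubclass(IndustrialCluster, GoalOrientedNetwork)
```

Encoding the taxonomy this way makes the transitive relations explicit: every query about class membership follows directly from the subclass definitions.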

1.2 From Hierarchical Control to Cooperative Schemes

In the previous section, the modern organization was presented as a complex system. In such a system, the management and control problems are complex and, sometimes, have to be solved under the pressure of time. As Cassandras (2001) remarked, "the computational power alone does not suffice to overcome all difficulties encountered in analyzing, planning and decision making in the presence of disturbances". In this section, a review of management and control schemes is made following the presentation lines of (Filip 2008; Filip and Leiviskä 2009; Monostori et al. 2015).

1.2.1 Hierarchical Systems Approach

Hierarchical multilevel systems theory was developed in the 1970s to facilitate solving large-scale, complex computation, management and control problems by using the information and automation technologies available at that time. It was mainly based on the following principles (Cassandras 2001):

• A suboptimal solution could be viewed as a satisfactory one for a large-scale complicated problem.
• The particular structure of the problems to be solved, associated with the objects to be studied or/and controlled, could and should be exploited in order to decompose the original complex and complicated problem or controlled object into a set of sub-problems or subsystems, respectively, of more reasonable size.

There are three main subclasses of multilevel structures: (a) multi-strata, (b) multi-layer and (c) multi-echelon, which can be obtained by decomposing the original problem or task according to the complexity of description, [automatic] control task (or frequency of disturbances) and organization, respectively (Mesarovic et al. 1970). The most relevant hierarchy for this book is the multi-echelon one, which will be briefly described in the sequel (Fig. 1.1). The concept of multi-echelon hierarchy is drawn from military, industrial, and social multilevel organizations. It is adopted when a centralized management/control scheme is neither technically possible nor economically feasible. The basic idea is to replace the unique centralized decision/control unit by a set of specialized decision/control units which are placed on the various levels of a hierarchy. Those units might have different information bases and even different sets of objectives. The allocation of problems or tasks to the various decision/control units can be viewed as a division of work which is both vertical and horizontal.
Since the units might have different goals, a coordination mechanism is compulsory to make them work together harmoniously to accomplish a collective set of tasks (Malone and Crowston 1994; Van de Ven et al. 1976). In a multi-echelon system, at the i-th organization level, the j-th decision/control unit, D/CU_ij, which possesses a certain autonomy, solves the allocated sub-problem, which is fully specified by the vector of coordination parameters, b_{i-1}, received from the higher, (i-1)-th, echelon. While one part of the solution obtained in each decision/control unit is sent downwards, as a subsequent coordination input, to a well-defined set of decision/control units placed on the lower, (i+1)-th, level, another part is sent upwards, as a reaction, to the corresponding unit placed on the higher echelon.

Fig. 1.1 A simple two-level multilevel system. D/CU Decision/control unit, SSy Controlled subsystem, b Intervention variable, a Reaction variable, H Interconnection function, m Control variable, u Input interconnection variable, z Output interconnection variable, y Output variable, w Disturbance
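The vertical exchange just described (coordination inputs sent downwards, reactions sent upwards) can be caricatured in a few lines of code. In the sketch below, the local units' quadratic sub-problems and the price-adjusting coordinator are toy assumptions of ours, intended only to show the interaction pattern, not to reproduce the cited theory:

```python
# Toy two-echelon coordination loop: a coordinator sends a coordination
# parameter b downwards; each local decision/control unit solves its
# sub-problem for that b and reports its decision upwards as a reaction;
# the coordinator then adjusts b. All numerical details are illustrative.

def local_unit(target, b):
    """Local D/CU: choose m minimising (m - target)^2 + b*m (closed form)."""
    # dJ/dm = 2*(m - target) + b = 0  =>  m = target - b/2
    return target - b / 2.0            # decision, reported upwards

def coordinate(targets, budget=10.0, steps=50, lr=0.3):
    b = 0.0                            # coordination parameter ("price")
    for _ in range(steps):
        reactions = [local_unit(t, b) for t in targets]
        surplus = sum(reactions) - budget
        b += lr * surplus              # raise b if the shared budget is exceeded
    return b, [local_unit(t, b) for t in targets]

b, decisions = coordinate([6.0, 8.0])
# At the fixed point the local decisions jointly respect the budget:
assert abs(sum(decisions) - 10.0) < 1e-6
```

This is a price-type coordination scheme: the units never talk to each other, only vertically with the coordinator, which is exactly the pure-hierarchy restriction discussed in the next subsection.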

1.2.2 Towards Cooperative Schemes

The traditional multilevel schemes have been largely used throughout human history in command and control systems. They can be viewed as pure hierarchies in which the circulation of information is performed along a vertical axis only, upwards and downwards, as intervention and reaction/reporting messages, respectively. Over time, several drawbacks and limitations of pure hierarchies have been noticed, especially in large and highly networked systems, such as: inflexibility, difficult maintenance, and limited robustness to unexpected major disturbances. Consequently, new schemes have been proposed. They are characterized by (a) an increased exchange of information along both the vertical and horizontal axes and (b) cooperation capabilities of the decision/control units. The evolution towards more cooperative management and control schemes was supported by advances in information and communication technologies.

One of the first attempts at allowing cooperation among the decision/control units is due to Binder (1977), who introduced the concept of decentralized control with cooperation. The proposed scheme assumed a limited communication among the decision/control units placed at the same level within the hierarchy. A few years later, Hatvany (1985) proposed the concept of heterarchical organization, in which the exchange of information is permitted among units placed at various levels of the hierarchy. Holon, a term coined by Koestler (1970) to describe an organization scheme able to explain the life and evolution of biological and social systems, has been utilized in designing and implementing modern discrete part manufacturing systems in the form of holarchies (Valkenaerts et al. 1997; Van Brussel et al. 1997; Höpf and Schaeffer 1997; Hadeli et al. 2003; Wang and Choi 2014). In (Filip and Leiviskä 2009), it is argued that pure hierarchies and heterarchical schemes can be viewed as particular classes of the holarchy superclass (Fig. 1.2). At present, a cooperative system is characterized by the following features (Grundel et al. 2007): (a) there is more than one decision/control unit, (b) the decisions of the units influence a common decision space, (c) the units share a list of common objectives, and (d) the information is shared either in an active or in a passive manner. Nof (2007) proposed several design principles of the Collaborative Control Theory in the context of e-activities. The original set of principles includes:

• CRP (the principle of Cooperation Requirement Planning);
• DPIEM (the Distributed Planning of Integrated Execution Method);
• PCR (the Principle of Conflict Resolution in collaborative e-work);
• PCFT (the Principle of Collaborative Fault Tolerance);
• JLR (the Join/Leave/Remain principle in collaborative organizations);
• LOCC (the principle of Lines of Collaboration and Command).

Fig. 1.2 Holarchies: an object-oriented description. Δ “…has as particular forms…”, ◊ “…is made up of…”, n..* “n or more objects” 10 1 Collaboration and Decision-Making in Context
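The object-oriented reading of holarchies in Fig. 1.2 can be echoed by a minimal sketch in which every holon is simultaneously a whole (it may contain sub-holons) and a part (it may belong to a parent). The class design and the example entities below are our own illustration of this part-whole duality:

```python
# Minimal rendering of Koestler's holon idea: each holon is both a whole
# ("...is made up of..." sub-holons) and a part (it may have a parent).

class Holon:
    def __init__(self, name, parts=()):
        self.name = name
        self.parent = None
        self.parts = list(parts)        # composition: sub-holons
        for p in self.parts:
            p.parent = self             # each part knows the whole it belongs to

    def is_whole(self):
        """A holon acting as a whole contains at least one sub-holon."""
        return bool(self.parts)

    def depth(self):
        """Distance from the top of the holarchy."""
        return 0 if self.parent is None else 1 + self.parent.depth()

cell = Holon("machining cell", parts=[Holon("robot"), Holon("CNC machine")])
factory = Holon("factory", parts=[cell])

assert cell.is_whole() and cell.parent is factory   # whole AND part at once
assert cell.parts[0].depth() == 2                   # robot sits two levels down
```

Restricting the information flow to parent-child links recovers a pure hierarchy; allowing arbitrary holon-to-holon links recovers a heterarchy, which is why both can be seen as special cases of the holarchy superclass.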

The principles are further refined and detailed by Nof et al. (2015, p. 33) and Zhong et al. (2015). Monostori et al. (2015) state that there are various advantages of the cooperative control approaches in the context of production and logistics applications, such as:

• openness (it is easier to build and change);
• reliability (e.g. fault tolerance);
• higher performance (due to the distributed execution of tasks);
• scalability (incremental design is possible);
• flexibility (allowing heterogeneity and redesign);
• potentially reduced cost;
• spatial distribution of separated units.

At the same time, the above authors do not overlook the disadvantages of cooperative control, such as:

• communication overhead (e.g. time and cost of information exchange);
• lack of guarantee for data security and/or confidentiality;
• decision "myopia" (caused by focusing on local optima);
• chaotic behaviour (e.g. "butterfly effects" and bottlenecks);
• complexity of analysis in comparison to centralized and even hierarchical schemes.

In the context of leadership and management decisions, Harter (2009) describes collaborative thinking:

The human organization is by its nature a collective enterprise requiring communication, coordination, and frequent adaptation to changing conditions. On these grounds, collaborative thinking has instrumental value […] increasing the quality of our decisions and cultivating that "extended mind" on which we all rely. Collaborative thinking has an additional merit. It respects the dignity of each participant to whatever extent he or she can flourish as a rational being. In other words, collaborative thinking has ethical value.

So far, a review of the ever more automated management and control schemes has been presented without making specific reference to the role of the human agent in such systems. The allocation of functions between human and machine will be analyzed in the next section.

1.3 The Role of the Human in the System

In the previous section, the evolution of control and management schemes was reviewed. The problem to be examined next concerns the place and role of the human agent in such control and management schemes.

1.3.1 The Human in the Loop

SF writers and film producers have long proposed fearful visions of cooperative robots and computers dominating the world. Other, optimistic visions were proposed by the enthusiastic engineers and planners who dreamed, in the late 1960s and early 1970s, of unmanned factories. Until now, those dreams have not come to life, not only for ethical and social reasons, but also for technical causes that engineers have been aware of for several decades. More than four decades ago, Bibby et al. (1975) stated that

even highly automated systems […] need human beings for supervision, adjustment, maintenance and improvement. Therefore, one can draw the paradoxical conclusion that automated systems still are man-machine systems for which both technical and human factors are important.

Similar views were later expressed by Rasmussen (1983) and Parasuraman and Wickens (2008). The following questions can be formulated:

• Q1: Is there still any place for the human agent in the highly automated systems of the present day?
• Q2: Which functions are allocated to be executed by automatic control devices and computers and which tasks remain to be carried out by humans?
• Q3: To what extent can the management and control tasks be automated?

To answer the first question, let us first define automation. The schemes described in the previous section can be implemented by human agents, by computers, or by combined human-machine units. When a computer or another device executes certain functions that the human agent would normally perform, we speak of automation (Parasuraman et al. 2000, 2008). Parasuraman and Wickens (2008) noticed that modern automation has pervaded not only most safety-critical systems, such as aviation, power plants or intensive care units, but also transportation, homes, various robotized environments, entertainment and even intelligent clothes. It was perhaps Bainbridge (1983) who explained best, in the context of process control, the irony that the more advanced and automated the system is, the more crucial the role of the human agent may be. She pointed out that the irony of automation is caused by two factors:

• the human nature of the system designers, who want to eliminate the unreliable and inefficient human operator, and
• the nature of the remaining tasks to be carried out by human operators.

Bainbridge identified two ironies of automation:

• The first irony of automation: the designer, a human being, may also be an imperfect person and, consequently, a new major source of operating problems.

• The second irony of automation: the designer is not able to automate some tasks and leaves them to be carried out by the very unreliable and inefficient operator who was to be eliminated from the control scheme.

In a more general context, Drucker (1967, p. 174) viewed the computer as a moron. He stated that

The computer makes no decisions; it only carries out orders. It’s a total moron, and therein lies its strength. It forces us to think, to set the criteria. The stupider the tool, the brighter the master has to be—and this is the dumbest tool we have ever had. The technology evolved over the last decades and there is a significant potential to automate a great number of activities and to replace the human operator or decision maker in several activity domains. Dewhurst and Willmott (2014) noticed that

After years of promise and hype, machine learning has at last hit the vertical part of the exponential curve. Computers are replacing skilled practitioners in fields such as architecture, aviation, the law, medicine, and petroleum geology—and changing the nature of work in a broad range of other jobs and professions. Deep Knowledge Ventures, a Hong Kong venture-capital firm, has gone so far as to appoint a decision-making algorithm to its board of directors.

In spite of the remarkable advancements of automation, there are still domains where the potential of automation is limited. Having carried out an extensive survey of the US market, Chui (2016) found that

The hardest activities to automate with currently available technologies are those that involve managing and developing people (9 percent automation potential) or that apply expertise to decision making, planning, or creative work (18 percent).

The conclusion is obvious: the human cannot be totally eliminated and should be present in the loop in those activities where creativity, knowledge usage and … the instinct of self-preservation are required.

1.3.2 Allocation of Functions and Levels of Automation

Let us now examine the second question, Q2, about the allocation of functions to human and machine. Rasmussen (1983) identified three classes of behaviours of human agents and the associated information processing and management schemes. The possible types of behaviour are: (a) skill-based, (b) rule-based, and (c) knowledge-based. They are characterized by the different types of information utilized (signals, signs, and symbols) and the actions performed (Sheridan 1992, p. 18):

• The skill-based behaviour (SBB), which is met at the lowest level of control. It represents "sensory-motor performance during acts or activities which, following a statement of intention, take place without conscious control as smooth, automated and highly integrated patterns of behaviour". The sensed information is perceived in the form of continuous quantitative indicators (called signals) of the time-space behaviour of the environment.
• The rule-based behaviour (RBB), which is based on the solutions of situations previously met and solved by the human agent him/herself or by the experts who have trained him/her. The pieces of information perceived (called signs) activate predetermined human actions in accordance with previous experience or predefined conventions.
• The knowledge-based behaviour (KBB), which is met at the higher levels of management and control, when non-routine situations are faced and no predefined rules are available. In this case, the pieces of information that must be perceived (called symbols) are used for reasoning in humans' explicit goal-oriented activities.

In the remaining part of this book, the presentation will be focused on KBB and, to a lesser extent, on RBB. From the description of the three behaviour classes, one can draw the conclusion that the functions that can be carried out by a human's SBB can be automated to a quite high extent. On the contrary, the tasks that require KBB are more difficult and, perhaps, less desirable to automate. One of the earliest and best-known schemes for task allocation between human and machine is Fitts' list, also called the MABA-MABA ("Men are better at—Machines are better at") list (Fitts 1951, p. 10; De Winter and Dodou 2014). The list contains 11 statements meant to serve the designer in deciding which functions should remain to be performed by humans (six statements) and which ones are recommended to be automated (five statements), based on the evaluation of human and machine information processing and actuating capabilities (Table 1.1).

Table 1.1 The original Fitts' (1951) MABA-MABA list

"Humans appear to surpass present-day machines with respect to the following:
1. Ability to detect a small amount of visual or acoustic energy;
2. Ability to perceive patterns of light or sound;
3. Ability to improvise and use flexible procedures;
4. Ability to store very large amounts of information for long periods and to recall relevant facts at the appropriate time;
5. Ability to reason inductively;
6. Ability to exercise judgment.

Present-day machines appear to surpass humans with respect to the following:
1. Ability to respond quickly to control signals and to apply great force smoothly and precisely;
2. Ability to perform repetitive, routine tasks;
3. Ability to store information briefly and then to erase it completely;
4. Ability to reason deductively, including computational ability;
5. Ability to handle highly complex operations, i.e. to do many different things at once."

The list of Paul Fitts can be viewed as a starting point for a series of methods meant to decide which function should be allocated to the machine (in most cases, a computer) and which to the human agent. In the sequel, the method of Parasuraman, Sheridan, and Wickens (2000, 2008) will be adopted. The method starts from the model of human information processing consisting of four stages: (a) sensory processing, (b) perception/working memory, (c) decision-making, and (d) response selection. Then four generic independent functions are defined:

1. information acquisition (IAc);
2. information analysis (IAn);
3. decision selection (DSe);
4. action implementation (AIm).

In the remaining part of the book, the interest will be focused on the second and third functions: information analysis and decision selection.
Parasuraman, Sheridan, and Wickens (2000) remark that "automation is not all or none and can vary across a continuum of levels, from the lowest level of fully manual performance to the highest level of full automation". This remark sets the stage for examining the third question, Q3 (the extent to which the management and control functions can be automated). Sheridan and Verplanck (1978) proposed a ten-level scale of automation (Table 1.2). Parasuraman et al. (2000) subsequently remarked that the scale is relevant only for decision selection and action implementation. Save and Feuerberg (2012) proposed a new Level of Automation Taxonomy (LOAT), which is organized in accordance with the four generic functions described above. For each function, a specific set of automation levels was proposed in the context of aviation systems. Since this book is focused on [human] decision-making activities, the presentation will be limited to the relevant levels of the IAn and DSe functions (Tables 1.3 and 1.4).

Table 1.2 Levels of automation (Sheridan and Verplanck 1978)

1. "The computer offers no assistance; human must take all decisions and actions
2. The computer offers a complete set of decision/actuation alternatives, or
3. narrows the selection down to a few, or
4. suggests one alternative, and
5. executes that suggestion if the human approves, or
6. allows the human a restricted veto time before automatic executions, or
7. executes automatically, then necessarily informs the human, and
8. informs the human only if asked, or
9. informs the human only if the computer decides to
10. The computer decides everything, acts autonomously, ignores the human"

Table 1.3 LOAT for Decision and Action Selection (adapted from Save and Feuerberg 2012)

• Level 0. Human Decision Making: A (The human agent generates decision alternatives, selects the appropriate ones, and chooses the one to be implemented);
• Level 1. Artifact-Supported Decision Making: A and B (the human agent decides all actions to implement the selected decision by utilizing paper or other non-digital artifacts, for example, the telephone);
• Level 2. Automated Decision Support: C (The human agent selects a solution from the set composed of alternatives generated by the computer and by him/herself);
• Level 3. Rigid Automated Decision Support: D (The human agent can select a solution from the set of alternatives generated by the system or ask for new alternatives)

Table 1.4 LOAT for Information Analysis (adapted from Save and Feuerberg 2012)

• Level 0. Working Memory-Based Information Analysis: E (The human agent compares, combines and analyses different information items) and F (No other tool or support external to his/her working memory);
• Level 1. Artefact-Supported Information Analysis: E and G (Paper or other non-digital artifacts are utilized);
• Level 2. Low-Level Automation Support of Information Analysis: H (Based on the user's request, the system helps the human agent to compare, combine and analyze different information items);
• Level 3. Medium-Level Automation Support of Information Analysis: H and I (The system triggers various alerts, if the analysis produces results which require the attention of the user);
• Level 4. High-Level Automation Support of Information Analysis: J (The system helps the user to compare, combine and analyze different data items concerning the controlled/managed object by using the parameters specified by the user) and I;
• Level 5. Full Automation Support of Information Analysis: K (The system performs comparisons and analyses the data available about the controlled object based on parameters specified by the designer or a higher-level decision maker) and I

where the capital letters A, B, …, L indicate the statements that characterize the levels.

Figure 1.3 presents the typical automation levels of the information analysis and decision selection functions allocated to the computer in the specific class of information systems called decision support systems (DSS), which will be described in detail in Chap. 2. Save and Feuerberg (2012) remark that "the analysis of human-automation interaction in real situations shows that even automation tools with high technical capabilities may not provide the desired benefits or it may be rejected". The aspects of automation effectiveness will be examined in the next section.

Fig. 1.3 Levels relevant for DSS domain (inspired by Parasuraman et al. 2000)
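The observation that automation "can vary across a continuum of levels" is easy to make concrete in software. The sketch below encodes a few representative rungs of the Sheridan and Verplanck scale (Table 1.2) as an enum and gates a proposed control action accordingly; the dispatch logic is our own simplification for illustration, not part of the cited taxonomy:

```python
from enum import IntEnum

# A few representative rungs of the ten-level Sheridan-Verplanck scale;
# the values follow Table 1.2, the dispatch function is illustrative only.

class LOA(IntEnum):
    NO_ASSISTANCE = 1         # human takes all decisions and actions
    SUGGESTS_ONE = 4          # computer suggests one alternative
    EXECUTES_IF_APPROVED = 5  # executes the suggestion if the human approves
    INFORMS_AFTER = 7         # executes automatically, then informs the human
    FULLY_AUTONOMOUS = 10     # computer decides everything, ignores the human

def act(level, suggestion, human_approves):
    """Return (executed?, message) for a proposed control action."""
    if level <= LOA.SUGGESTS_ONE:
        # up to level 4 the computer only advises; the human acts
        return (False, f"suggested: {suggestion}")
    if level == LOA.EXECUTES_IF_APPROVED:
        # level 5: execution is conditional on explicit human approval
        return (human_approves, "executed" if human_approves else "vetoed")
    # levels above 5: the computer executes regardless of approval
    return (True, "executed autonomously")

assert act(LOA.SUGGESTS_ONE, "open valve", True) == (False, "suggested: open valve")
assert act(LOA.EXECUTES_IF_APPROVED, "open valve", False) == (False, "vetoed")
assert act(LOA.INFORMS_AFTER, "open valve", False)[0] is True
```

Even this toy dispatcher shows why the choice of level matters: the same suggestion and the same human input lead to different outcomes depending solely on where the system sits on the scale.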

1.3.3 The Need for Effective Computer Supported Collaboration

The evolution toward collaborative networked organizations and the more cooperative management and control schemes of the present day has naturally led to the need to involve more people in solving ever more complex problems. In many cases, an individual is neither empowered, nor possesses the necessary knowledge, to solve such matters. Workgroups and teams frequently have to make or negotiate management and control decisions, and they have become a fundamental part and, at the same time, a characteristic feature of the new organization paradigm (Lewis 2010). The remark of Mintzberg (1971, 1973) concerning the high proportion of managers' time spent in communicating and meetings is apparently still valid. Modern technology has made information collecting easier and communication more efficient and less time consuming. Traditional meetings have caused workgroups and teams to encounter various difficulties and to consume their time in a not very effective way. Lewis (2010) enumerates several causes why traditional meetings are not always effective in problem solving: (a) failure to adequately address and define problems, (b) pressure for conformity, which may result in groupthink, (c) de-individualization and diffusion of responsibility, which might lead to extreme decisions, and (d) social loafing, in which some of the group members engage. Consequently, there is a real need and place for computer-supported collaborative groupwork.

1.4 Towards Anthropocentric Information Systems

The conclusion of the previous section is that human information processing and acting are needed in a significantly large number of real-world applications of computers and automation schemes. In this section, the need for more human-oriented solutions is analyzed.

1.4.1 Several Questions and Answers

A series of questions about the interaction between human and the information system (an artifact) is formulated (Filip 1995, 2007):

• Q1: Can the information system be a tool or a computerized adviser that supports the human agent in performing his/her tasks?
• Q2: If the answer to Q1 is positive, to what extent does it enable the human agent to perform his/her tasks in a more effective manner?
• Q3: What is the impact of the artifact on the user's status and professional life quality?
• Q4: To what extent can the "services" provided by the artifact be adapted to the dynamics of the human agent's behavior?

The vast majority of the early information systems that were designed and implemented were underutilized, in spite of the IT providers' and consultants' promises and the allocated budgets, because they used to be "unreliable, intolerant, insufficient and impersonal" (James 1980). The intolerance was caused by the need to use an absolutely correct stream of instructions and operations to make the system operate in a proper way. The insufficiency was explained by the fact that the end users, who did not possess sufficient training in computer usage, frequently asked the IT professionals to solve operation problems. The early systems were impersonal because the functions provided and the interfaces available were little adapted to personal characteristics, in particular to the skills and even moods of the end users. At present, the reliability, intolerance and insufficiency problems are solved to a quite large extent due to the progress made in IT and in users' education. Consequently, the usage of information systems is no longer viewed as an additional burden, but as an essential means of support for the end user to perform his/her tasks in a more effective and comfortable way. The adaptability to the user's dynamically changing characteristics and behaviour is still a problem solved only to a rather limited extent. To answer the second question, one should analyse the information system's performance with respect to its efficiency, which is determined by the extent to which the functions of the artifact are necessary and useful.
On the other hand, the effectiveness of a particular system means the "services" of the artifact are offered to a particular end user (or to all members of a group) in a usable and unambiguous manner at a reasonable price. Mårtenson (1990) qualifies a system as informationally opaque in case a particular end user either cannot obtain the necessary information or is "flooded" by data which are unnecessary or unusable. This might lead to losing confidence in the utility of the system. In this context, one can notice the dual phenomenon: overconfidence in the information provided by the computer-based artifact. Figueira et al. (2005, p. XXIX) warned, in the context of multicriteria decision analysis (MCDA), that: "Software is a tool and should be used as a tool. Before using a software, it is necessary to have a sound knowledge of the adopted methodology and of the decision problem at hand". The syndrome of overconfidence in the decision suggestions provided by information systems might have unfavorable consequences, especially at the lower levels of management and control structures. Two decades ago, Johannsen (1994) observed that the trend to take over, in a technology-oriented approach, several decision tasks might make the human agent lose professional skills and eventually lead to boredom in normal conditions and even to catastrophic decisions in crisis situations. Consequently, the quality of professional life of the end user can significantly worsen.

The current methods and technologies to profile the user with respect to his/her information acquiring interests and, to a lesser extent, operation skills represent an obvious adaptation feature of current information systems to the slowly varying characteristics of the human. Adaptation to faster varying characteristics, such as the emotional state, is currently making real progress due to biometric interfaces (Kaklauskas 2015).
This may be the answer to the fourth question. The subject will be detailed in Sect. 4.5.

1.4.2 Attributes

Having analyzed the questions and answers of the previous section, we estimate that the general objective articulated by Filip (1989, 1995) is still valid:

It is necessary to design and implement information systems that are useful and usable. At the same time, they should stimulate users to acquire new skills and knowledge and, possibly, to adopt new working styles which allow for exploiting individual creativity and intellectual capabilities and make the system effectively used.

From the above general objective, several classes of specific objectives can be derived:

1. The range of functions and services the system provides should be broad enough to support the end user in solving a large set of problems and decision situations. In particular, the system should be flexible and adaptive to the user's characteristics and to the dynamic variations of his/her behaviour and intentions. A learning capability to update the existing stored knowledge is to be appreciated (Piramuthu and Shaw 2009);
2. The structure and the operation of the system should be transparent, and information must be offered together with explanations and justifications;

Table 1.5 Anthropocentric systems (Filip 1995)

How does the system support performing the task?
  Old: unreliable, intolerant, insufficient, impersonal;
  Current: reliable, usable (user friendly), sufficient and [to a certain extent] personalized
How are the working conditions affected? (collateral results)
  Current: the system needs [limited] training and is comfortable to utilize;
  Desired: it should stimulate creativity and knowledge enrichment
Evolution: from supporting tools to "intelligent assistants" and "coaches"
Services provided: broad range, extensible, informational transparency, adaptive to the user's variable characteristics
Technology: functional transparency
Development: continuous evolution, standards are observed

3. The system should be extensible, to include new functions which are directly defined by the end user him/herself.

A synthetic presentation of the attributes of anthropocentric information systems is contained in Table 1.5.

1.5 Decisions and Decision Units

At the beginning of the chapter it was stated that various management and control decisions are made by managers, other knowledge workers, and process operators. In this section, a review of basic aspects of decision-making is made with particular emphasis on multi-criteria decision models (MCDM).

1.5.1 Definitions

1.5.1.1 The Decision

There are many definitions of the noun decision one can find in the literature (Simon 1960; Mintzberg 1973; Bonczek et al. 1984). In (Filip 2007, p. 9, 2008; Stefanoiu et al. 2014, p. 313), the following definition of the decision is adopted:

The decision is the result of human conscious activities aiming at choosing a course of action for attaining a certain objective (or a set of objectives). It normally implies allocating the necessary resources and it is the result of processing information and knowledge that is performed by a person (or a group of persons) who is empowered to make the choice and is accountable for the quality of the solution adopted to solve a particular problem or situation.

There are several keywords in the definition adopted:

• Making a choice among several possible courses of action (commonly named alternatives) is the central keyword. Sometimes, the alternatives can be identified in the decision environment. Other decision-making activities require designing the alternatives, as happens when setting the price of an IT product, the fee for consulting services to be offered on the market, or the production rates of the plants within a continuous process industry enterprise.
• The decision is made by a human agent or a decision unit (also called a group, composed of several persons) in a conscious, purpose-oriented set of activities, to solve a certain decision situation in such a way as to satisfy a set of objectives. An automated artefact does not make decisions, but implements a control law or executes computer programs which are designed by a human.

1.5.1.2 Decision Situations

The decision situations and the associated decision problems may be forced or unforced.

The forced (sometimes called objective) situations may be caused by several factors that are beyond the decision-maker's will, such as: (a) intolerable deviations from a normal or designed state (for example, the failure of a control system which requires replacement); (b) perceived or forecast changes in the decision environment (for example, anticipating the opportunity for releasing new IT products on the market); (c) new states attained (for example, the new networked structure of the enterprise). All the above cases require reactive decisions that are meant to correct undesirable situations or to exploit perceived opportunities.

The unforced (or subjective) decision situations are caused by various factors related to the decision-maker, such as: (a) changing the objectives or the levels of aspiration of the decision-maker(s); (b) the preoccupation to prevent possible undesirable states which might show up even though no symptom is perceived to anticipate such unpleasant situations. The results of solving such problems are called proactive decisions.

1.5.2 Possible Approaches

There are several possible approaches to solving a problem associated with a decision situation. The econological (economically logical) model of the decision activities assumes that the decision-maker is fully informed and aims at optimizing one or several performance indicators in a rational manner. In this case, the DM process consists of a sequence of steps such as: (a) problem statement, (b) definition of the criterion (or criteria, if multi-attribute models are utilized) for the evaluation of decision alternatives, (c) listing and evaluation of all available alternatives,

(d) selection of the "best" alternative, and (e) its release for execution. Consequently, mono- or multi-criteria optimization (Borne et al. 2013; Stefanoiu et al. 2014) can be applied.

1.5.2.1 The Satisficing Approach

H. Simon (1955) realized the time and cost limits of using optimization in decision-making activities and envisaged an approach which is obviously closer to the way the human decision-maker behaves in real, everyday life. He noticed that in nature "the organisms adapt well to 'satisfice'; they do not optimize" (Simon 1956). Time constraints, the shortage of adequate data, the prohibitive costs of information collecting and processing, or even the lack of confidence in the results provided by computerized optimization algorithms that are not well explained and fully understood may lead the decision-maker to accept a suboptimal satisficing solution (a term which resulted from the combination of "satisfy" and "suffice"), instead of searching for an optimal one.

The basic idea of the approach is simple and practical. The rational economic man (REM) of neo-classical economics (Hollis and Nell 2007), who is tempted and able to make a choice by using optimization, is replaced by a decisional organism that possesses limited knowledge and resources for information collecting and processing. The problem is, consequently, simplified by various operations such as: (a) limiting the size of the set of alternatives taken into consideration and (b) replacing a complicated performance indicator by a simpler one of threshold type. A certain aspiration level is set to evaluate the perceived utility of the alternatives, which are sequentially explored. In case a satisfactory alternative is not found, the aspiration level is to be lowered. On the contrary, when a satisfactory solution is quickly found, the aspiration level may be raised and, eventually, a near-optimal solution can be obtained. The process of identifying and evaluating the alternatives is based on common-sense rules called heuristics (Simon 1957).
Simplifying assumptions together with the sequential evaluation of alternatives, instead of a synoptic one, lead to the concept of bounded rationality (Simon 1960/1977; Barros 2010).
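For illustration only, the sequential exploration with an adjustable aspiration level described above can be sketched as follows; the alternatives, their perceived utilities, and the initial aspiration level are hypothetical, not taken from the literature cited.

```python
def satisfice(alternatives, utility, aspiration, step=0.1):
    """Scan the alternatives in the order they are generated and accept
    the first one whose perceived utility reaches the aspiration level.
    If none qualifies, lower the aspiration level and scan again."""
    while aspiration > 0:
        for a in alternatives:
            if utility(a) >= aspiration:
                return a, aspiration
        aspiration -= step  # no satisfactory alternative found: lower the bar
    return None, aspiration

# Hypothetical perceived utilities of three alternatives
utils = {"A1": 0.55, "A2": 0.72, "A3": 0.40}
choice, level = satisfice(list(utils), utils.get, aspiration=0.8)
# With the initial aspiration 0.8 no alternative qualifies; after lowering
# it to 0.7, A2 (utility 0.72) is the first satisfactory alternative found.
```

The sketch stops at the first acceptable alternative rather than ranking all of them, which is precisely what distinguishes satisficing from the synoptic, optimizing evaluation.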

1.5.2.2 The Implicit Favourite

An approach which is quite often met in real life is based on the implicit favourite solution. The decision-maker, though having in mind a favourite course of action, nevertheless performs the formal steps of a systematic procedure, with a view to confirming the results he/she expects in a respectable and apparently objective way. Keeney and Raiffa (1999, p. 9) give several reasons for such an approach, such as:

• the "psychological comfort" that might be felt by the decision-maker when his intuition is legitimated by an apparently systematic and rigorous procedure;

• the facilitation of communicating the choice made, if the result is presented in a systematic way;
• the advocacy or justification of the decision to others, which may sometimes be followed by a reconciliation phase.

1.5.2.3 Multiparticipant Decisions

Several decades ago, Gray and Nunamaker (1993) cited the remark Keen made at the Closing Plenary session of the First International Conference on Decision Support Systems (DSS):

The fundamental model of DSS – the lonely decision-maker striding down the hall at high noon to make a decision – is true only in rare cases. In real organizations, be they public or private, Japanese, European or American, most decisions are taken only after extensive consultation. Although it is possible on occasions for decision makers to go counter to the consensus of their organizations, this is not a viable long-term position: those decision makers out of tune with the people in their organization either depart the organization or the organization undergoes massive personnel turnover.

At present, in the context of the extended/virtual/networked enterprise, ever more decisions are made by a group of collaborating persons instead of an individual. A remark and a warning need to be made concerning group (also called multiparticipant) decisions. De Michelis (1996) stated that "The group decision may be the result of either combining individual solutions or selecting and adopting one individual decision. Consequently it might not be rational in Simon's acceptance. It is not necessarily the best choice or combination of several individual decisions, even though those might be, each of them, optimal, because the various persons involved might have different perspectives, goals, information and knowledge bases and criteria of choice. Therefore, group decisions show a high social nature including possible conflicts of interest, different visions, influences and relations". The subject will be addressed in detail in Sects. 2.3.4.1, 3.2, and 3.3.

1.5.3 Multicriteria Decision Models

The basic elements of a multicriteria decision model are (Clemen 1996):

• The set of alternatives, Ai (i = 1, 2, …, na), which are identified or designed;
• The objectives pursued, Oj (j = 1, 2, …, no);
• The constraints, Ck (k = 1, 2, …, nr);

• The corresponding evaluation criteria, ECj (j = 1, 2, …, nc; nc = no + nr), and their corresponding weights, wj (j = 1, 2, …, nc);
• The approach adopted to recommend an ordering of alternatives (see Sect. 1.5.2 above).

1.5.3.1 Example

To illustrate the above concepts, let us consider the following decision problem. Mr. X, a young engineer, has to design and implement a new information system for supporting the work of the people of department D. He identifies, helped by Ms. B, the head of department D, the following alternatives:

• A1: designing the system from software pieces which are available on the market;
• A2: adapting a system generator available for the application domain;
• A3: using the services provided by a cloud computing company.

Mr. X, an ambitious person, would prefer the first alternative, which allows him to deploy his knowledge and creativity. He also counts on the support of Ms. B and her colleagues, the end users, who feel this solution could be more flexible and better adapted to their needs, skills, and habits. At the same time, Ms. Y, the chief accountant, thinks the third alternative (cloud computing) could be more economical, since no costs for new software versions, training the people and so on are foreseen. In his turn, Mr. W, the head of the company, a very experienced person, is cautious about the usage of cloud computing. He does not want to become a "captive client" of a certain service provider and is not sure the data of his company are completely protected. He would prefer an "in-house" system. At the same time, he desires a rapid implementation and, consequently, is not happy with alternative A1 either, because A1 might not be a quick enough solution. Moreover, Mr. W would prefer a system from a single provider with a high reputation for quality. Consequently, he is in favour of alternative A2.

1.5.3.2 Objectives and Criteria

From the above description, it results there are at least four objectives to be pursued:

• O1: the most flexible solution;
• O2: high adaptation to the needs expressed by the end users;
• O3: the most rapid implementation;
• O4: high and sound reputation of the IT provider.

When a metric is associated with an objective, one defines an evaluation criterion that serves to measure the merit, named score sij (i = 1, 2, 3; j = 1, 2, 3, 4), of each alternative, Ai, with respect to a specific aspect or objective, Oj.

The set of criteria may be utilized for the following purposes:

• filtering out the alternatives that do not fall within the acceptable limits, named thresholds;
• comparing and ordering the remaining acceptable alternatives.

A number of requirements for setting the criteria are proposed by Keeney and Raiffa (1999, p. 39):

• Completeness: the criteria must cover all relevant aspects that determine a choice;
• Nonredundancy: a certain aspect must be reflected by no more than one evaluation criterion;
• Operability: the criteria should be comprehensible to all persons involved in decision making and should allow the measurement or qualitative evaluation of the merit of the alternatives;
• Workable size: there should be a reasonable tradeoff between the desire to consider all relevant aspects and the need to operate with a manageable set of criteria.

It is beyond the purpose of this book to present the methods for solving multi-criteria problems. The interested reader can find some of them described, together with references to other published materials, specialized journals, and associated software products, in (Clemen 1996; Kirkwood 1997; Keeney and Raiffa 1999; Figueira et al. 2005; Stefanoiu et al. 2014: Chaps. 4 and 5; Zavadskas and Turskis 2011; Clemen and Reilly 2014; Zavadskas et al. 2014, 2015, 2016).
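As a purely illustrative sketch, one of the simplest methods for ordering acceptable alternatives is additive weighting: each alternative's weighted scores over the criteria are summed and the totals compared. The scores and weights below are hypothetical, merely echoing the example with alternatives A1..A3 and objectives O1..O4; the book does not prescribe this method.

```python
weights = [0.3, 0.3, 0.2, 0.2]  # w_j for O1..O4, summing to 1
scores = {                      # hypothetical scores s_ij on a 0-10 scale
    "A1": [9, 8, 3, 5],         # flexible and user-adapted, but slow
    "A2": [6, 6, 7, 8],         # balanced; reputable single provider
    "A3": [7, 5, 9, 4],         # fastest, but vendor lock-in concerns
}

def aggregate(s, w):
    """Weighted-sum merit of one alternative: sum_j s_ij * w_j."""
    return sum(si * wi for si, wi in zip(s, w))

# Order the alternatives by decreasing aggregated merit
ranking = sorted(scores, key=lambda a: aggregate(scores[a], weights),
                 reverse=True)
```

With these particular numbers the ordering follows from the arithmetic alone; a different weight vector (reflecting, say, Mr. W's priorities) would yield a different ranking, which is exactly why weight elicitation matters.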

1.5.3.3 Evaluation Criteria or Evaluators?

From the example of the previous section, the reader could draw several conclusions:

• In choosing a course of action, there may be more than one participant involved in the various decision-making activities, such as identifying the alternatives and articulating the subjects of interest, the objectives, and the corresponding criteria;
• An individual may have more than one subject of preoccupation and corresponding objectives;
• In the vast majority of real-world problems, the set of evaluation criteria reflects the preferences of several interested persons (evaluators);
• The decision power of the people involved in decision-making is reflected in the weights associated with the various objectives and corresponding evaluation criteria. There are several rules to set adequate weights (Zavadskas et al. 2010; Riabacke et al. 2012; Stefanoiu et al. 2014, p. 321; Zavadskas and Podvezko 2016).

When a decision is to be made by several persons, the following generic simple rules for aggregating individual preferences (Nitzan and Paroush 1983) are relevant:

• R1: the simple majority rule, which indicates equal decision powers for all participants involved (w = [1, 1, …, 1]T);
• R2: the rule of simple non-restricted majority, which indicates that all persons involved have equal decision powers but one (w = [2, 1, …, 1]T);
• R3: the rule of simple quasi-majority, which indicates that some people (in the above example, the head of the enterprise, the chief accountant, the project manager) have higher decision powers than the rest of the persons involved (for example, the simple end users) (w = [5, 3, 2, …, 1]T);
• R4: the rule of almost unilateral decision, which indicates that the view of a single person (in our example, the head of the enterprise) is almost the only one which counts (w = [10, 1, …, 1]T).

The subject will be refined in Sects. 2.4 and 3.2, which address the problem of selecting the IT platform and computational social choice, respectively.
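The effect of the weight vectors above can be illustrated with a small, hypothetical sketch: each participant votes for one alternative, the votes are tallied with the participant's decision power as weight, and the alternative with the largest weighted tally wins. The participants and their votes below loosely echo the example (Mr. W, Ms. Y, Mr. X, and two end users) and are not taken from the book.

```python
from collections import defaultdict

# Hypothetical individual choices, echoing the example of Sect. 1.5.3.1
votes = {"W": "A2", "Y": "A3", "X": "A1", "U1": "A1", "U2": "A1"}

def aggregate(votes, weights):
    """Weighted plurality: sum each participant's weight onto the
    alternative he/she votes for; return the alternative with the
    largest weighted tally."""
    tally = defaultdict(float)
    for person, alternative in votes.items():
        tally[alternative] += weights[person]
    return max(tally, key=tally.get)

equal = {p: 1 for p in votes}                         # rule R1
skewed = {"W": 10, "Y": 1, "X": 1, "U1": 1, "U2": 1}  # rule R4

winner_r1 = aggregate(votes, equal)   # A1: three equal votes against one and one
winner_r4 = aggregate(votes, skewed)  # A2: Mr. W's weight of 10 prevails
```

The same votes thus produce different group decisions under R1 and R4, which makes the choice of the weight vector itself a decision worth scrutinizing.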

1.6 Notes and Comments

The chapter that comes to an end at this point has had a preparatory character for the more specific content of the following ones. At the same time, it was meant to bring in arguments that justify the need for computer-supported collaborative decision-making. The presentation started from describing the controlled object and subsequently moved to management and control schemes and the corresponding decision-supporting information tools, together with their desirable characteristics and possible usages. The main ideas recommended to be retained by the reader are:

• Decisions are made at every management and control level.
• The evolution of the controlled object has been driven by the market and by information and communication technology.
• At present, collaborative networks, which could provide competitive advantages especially for small and medium enterprises (SME) and professional communities, are technologically possible and recommendable.
• Over time, the effective management and control schemes have exploited particular characteristic features of the controlled objects.
• At present, the schemes that allow information exchange and cooperation of decision/control units are gaining ever more ground, in step with collaborative networked organizations.
• The more automated the management and control functions of a certain object are, the more necessary might be the presence of the human in the loop, especially for non-routine, critical tasks which have to be carried out in emergency situations in a knowledge-based manner.

• In order to be effective, the allocation of functions between human and machine and the automation level should be decided in accordance with task-specific characteristics and the human agents' skills and knowledge.
• A decision support system (DSS) is an anthropocentric information tool meant to increase the sustainable level of complexity of the tasks the human is to carry out and, at the same time, to stimulate his/her creativity and knowledge acquisition.
• The criteria utilized in decision analysis models can represent either various perspectives of a single decision maker or the priorities of different participants in problem analysis and solution selection.

The following chapters will bring in more technical details on computer-supported collaborative work and decision-making. The presentation will address concepts, methods and technology aspects. Chapter 2 will address the essential issues in the field of decision support systems: decision problems and the activities carried out, DSS characteristic features, and the persons involved in decision making and in system design and usage. Several particular subclasses of the general class of DSS will be reviewed too. Chapter 3 will contain relevant concepts and methods used in computer-supported collaboration, such as collaborative decision-making, computational social choice, and collaboration engineering. The fourth chapter will survey several essential enabling information and communication technologies which technically support decision making and collaborative activities, such as business intelligence and analytics, web technology, social networks, mobile and cloud computing, biometrics, and serious digital games. Several particular implementations are presented in the final chapter.

References

Bainbridge L (1983) Ironies of automation. IFAC J. Automatica, 19(6): 775–779.
Barros G (2010) Herbert A. Simon and the concept of rationality: boundaries and problems. Brazilian Journal of Political Economy, 30(3): 455–472.
Bibby K S, Margulies F, Rijnsdorp J E, Withers R M (1975) Man's role in control systems. In: Proceedings, IFAC 6th Triennial World Congress, Boston, Cambridge, Mass: p. 24–30.
Binder Z (1977) Sur l'organisation et la conduite des systèmes complexes. Thèse de Docteur, LAG Grenoble (in French).
Bonczek R H, Holsapple C W, Whinston A B (1984) Developments in decision support systems. Advances in Computers, 23: 141–175.
Borne P, Popescu D, Filip F G, Stefanoiu D (2013) Optimization in Engineering Sciences: Exact Methods. J. Wiley & Sons, London.
Brandl D (2012) Practical Applications of ISA 95 Standard. MESA International.
Bughin J, Chui M, Manyika J (2010) Clouds, big data and smart assets: ten tech-enabled business trends to watch. McKinsey Quarterly, 22 September.
Camarinha-Matos L M (2009) Collaborative organizations: status and trends in manufacturing. Annual Reviews in Control, 33(2): 199–208.
Camarinha-Matos L M, Afsarmanesh H (2005) Collaborative networks: a new scientific discipline. Journal of Intelligent Manufacturing, 16(4–5): 439–452.

Camarinha-Matos L M, Afsarmanesh H, Galeano N, Molina A (2009) Collaborative networked organizations: concepts and practice in manufacturing enterprise. Computers & Industrial Engineering, 57(1): 46–60.
Cassandras G S (2001) Complexity made simple - at a small price. In: Filip F G, Dumitrache I, Iliescu S eds, Proceedings, 9th IFAC Symposium Large-Scale Systems: Theory and Applications, Elsevier: p. 1–5.
Chiu C M, Liang T P, Turban E (2014) What can crowdsourcing do for decision support? Decision Support Systems, 65: 40–49.
Chui M, Manyika J, Miremadi M (2016) Where machines could replace humans—and where they can't (yet). McKinsey Quarterly, July.
Ciurea C (2009) A metrics approach for collaborative systems. Informatica Economica, 13(2): 41–49.
Clemen R T (1996) Making Hard Decisions: An Introduction. 2nd ed., Duxbury Press, Belmont, CA.
Clemen R T, Reilly T (2014) Making Hard Decisions with DecisionTools. 3rd ed., Babson College.
De Michelis G (1996) Co-ordination with cooperative processes. In: Humphrey P, Bannon L, McCosh A, Migliarese P, Pomerol J C eds, Implementing Systems for Supporting Management Decisions. Chapman & Hall, London: p. 108–123.
De Winter J C F, Dodou D (2014) Why the Fitts list has persisted throughout the history of function allocation. Cognition, Technology & Work, 16(1): 1–11.
Dewhurst M, Willmott P (2014) Manager and machine: the new leadership equation. McKinsey Quarterly (available at: http://www.tdtsustentabilidade.org/wp-content/uploads/2015/01/Manager-and-machine-The-new-leadership-equation.pdf, accessed on 17.07.2016).
Drucker P (1967) The manager and the moron. In: Drucker P, Technology, Management and Society: Essays by Peter F. Drucker. Harper & Row, New York: p. 166–177.
Figueira J S, Greco S, Ehrgott M eds (2005) Multiple Criteria Decision Analysis: State of the Art Surveys. Springer Science and Business Media, New York.
Filip F G (1989) Creativity and decision support systems. Studies and Research in Computers and Informatics, 1(1): 41–49.
Filip F G (1995) Toward more humanized real-time decision support systems. In: Camarinha-Matos L, Afsarmanesh H eds, Balanced Automation Systems: Architectures and Design Methods. Chapman & Hall, London: p. 230–240.
Filip F G (2007) Sisteme suport pentru decizii ("Decision Support Systems"). Editura Tehnica, Bucuresti (in Romanian).
Filip F G (2008) Decision support and control for large-scale complex systems. Annual Reviews in Control, 32(1): 61–70.
Filip F G, Leiviskä K (2009) Large-scale complex systems. In: Nof S Y ed, Springer Handbook of Automation. Springer, Berlin, Heidelberg: p. 619–638.
Findeisen W (1982) Decentralized and hierarchical control under consistency or disagreement of interests. IFAC J. Automatica, 18(6): 647–664.
Findeisen W, Bailey F N, Brdys M, Malinowski K, Tatjewski P, Wozniak A (1980) Control and Coordination in Hierarchical Systems. J. Wiley & Sons, Chichester.
Finkbeiner M (2013) From the 40s to the 70s – the future of LCA in the ISO 14000 family. The International Journal of Life Cycle Assessment, 18(1): 1–4.
Fitts P M ed (1951) Human Engineering for an Effective Air Navigation and Traffic Control System. National Research Council, Washington (DC).
Gray P, Nunamaker J F (1993) Group decision support systems. In: Sprague Jr R, Watson H eds, Decision Support Systems: Putting Theory into Practice, 3rd edition. Prentice Hall International, Englewood Cliffs, NJ: p. 309–326.
Grundel D, Murphy R, Pardalos P, Prokopyev O eds (2007) Cooperative Systems: Control and Optimization. Lecture Notes in Economics and Mathematical Systems 588, Springer Science and Business Media, Berlin.

Hadeli K, Valckenaers P, Zamfirescu C B, Van Brussel H, Saint Germain B, Holvoet T, Steegmans E (2003) Self-organising in multi-agent coordination and control using stigmergy. In: Engineering Self-Organizing Systems. Springer, Berlin, Heidelberg: p. 105–123.
Harter N (2009) Critical thinking in groups. Journal of Leadership Education, 8(1): 111–117.
Hatvany J (1985) Intelligence and cooperation in heterarchic manufacturing systems. Robotics and Computer-Integrated Manufacturing, 2(2): 101–104.
Hollis M, Nell E J (2007) Rational Economic Man. Cambridge University Press.
Höpf M, Schaeffer C F (1997) Holonic manufacturing systems. In: Information Infrastructure Systems for Manufacturing. Springer US: p. 431–438.
Howe J (2006) The rise of crowdsourcing. Wired Magazine, 14(6): 1–4.
Ilgin M A, Gupta S M (2012) Remanufacturing Modeling and Analysis. CRC Press.
James E B (1980) The user interface. The Computer Journal, 23(1): 25–28.
Jamshidi M (1983) Large-Scale Systems: Modelling and Control. North Holland, New York.
Johannsen G (1994) Integrated systems engineering: the challenging cross-discipline. In: Johannsen G ed, Proc. IFAC Conference on Integrated Systems Engineering. Pergamon Press, Oxford: p. 1–10.
Kaklauskas A (2015) Biometric and Intelligent Decision Making Support. Springer.
Keeney R L, Raiffa H (1999) Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Cambridge University Press, Cambridge, UK.
Kirkwood C W (1997) Strategic Decision Making: Multiobjective Decision Analysis with Spreadsheets. Duxbury Press, Belmont, CA.
Koestler A (1970) Beyond atomism and holism—the concept of the holon. Perspectives in Biology and Medicine, 13(2): 131–154.
Lewis F (2010) Group support systems: overview and guided tour. In: Kilgour D M, Eden C eds, Handbook of Group Decision and Negotiation. Springer Science + Business Media, Dordrecht: p. 249–268.
Malone T W, Crowston K (1994) The interdisciplinary study of coordination. ACM Computing Surveys, 26(1): 87–119.
Mårtenson L (1990) Are the operators in control of complex systems? In: Proc., 13th IFAC Triennial World Congress, vol. 8. Pergamon Press, Oxford: p. 259–270.
Mesarovic M D, Macko D, Takahara Y (1970) Theory of Hierarchical Multilevel Systems. Academic Press, New York.
Mintzberg H (1971) Managerial work: analysis from observation. Management Science, 18(2): B97–B110.
Mintzberg H (1973) The Nature of Managerial Work. Prentice Hall, Englewood Cliffs, New Jersey.
Monostori L, Valckenaers P, Dolgui A, Panetto H, Brdys M, Csáji B C (2015) Cooperative control in production and logistics. Annual Reviews in Control, 39: 12–29.
Nitzan S, Paroush J (1983) Small panels of experts in dichotomous choice situations. Decision Sciences, 14(3): 314–325.
Nof S Y (2003) Design of effective e-work: review of models, tools, and emerging challenges. Production Planning and Control, Special Issue on e-Work Models and Cases, 15(8): 681–703.
Nof S Y (2007) Collaborative control theory for e-work, e-production and e-service. Annual Reviews in Control, 31(2): 281–292.
Nof S Y, Ceroni J, Jeong W, Moghaddam M (2015) Revolutionizing Collaboration through e-Work, e-Business, and e-Service (Vol. 2). Springer.
Nof S Y, Morel G, Monostori L, Molina A, Filip F (2006) From plant and logistics control to multi-enterprise collaboration: milestones report of the manufacturing & logistics systems coordinating committee. Annual Reviews in Control, 30(1): 55–68.
Panetto H, Jardim-Goncalves R, Molina A (2012) Enterprise integration: theory and practice. Annual Reviews in Control, 36(2): 284–290.

Parasuraman R, Sheridan T B, Wickens C D (2000) A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man and Cybernetics - Part A: Systems and Humans, 30: 286–297.
Parasuraman R, Sheridan T B, Wickens C D (2008) Situation awareness, mental workload, and trust in automation: viable, empirically supported cognitive engineering constructs. Journal of Cognitive Engineering and Decision Making, 2(2): 140–160.
Parasuraman R, Wickens C D (2008) Humans: still vital after all these years of automation. Human Factors: The Journal of the Human Factors and Ergonomics Society, 50(3): 511–520.
Piramuthu S, Shaw M J (2009) Learning-enhanced adaptive DSS: a design science perspective. Information Technology and Management, 10(1): 41–54.
Power D J (2015) Creating a data-driven global society. In: Lakshmi S I, Power D J eds, Reshaping Society through Analytics, Collaboration, and Decision Support: Role of Business Intelligence and Social Media. Springer International Publishing: p. 13–28.
Rasmussen J (1983) Skills, rules, and knowledge; signals, signs, and symbols, and other distinctions in human performance models. IEEE Trans. Systems, Man and Cybernetics, SMC-13(3): 257–266.
Riabacke M, Danielson M, Ekenberg L (2012) State-of-the-art prescriptive criteria weight elicitation. Advances in Decision Sciences, article ID 276584, doi:10.1155/2012/276584.
Sage A P, Cuppan C D (2001) On the systems engineering and management of systems of systems and federations of systems. Information Knowledge Systems Management, 2(4): 325–349.
Save L, Feuerberg B (2012) Designing human-automation interaction: a new level of automation taxonomy. In: De Waard D, Brookhuis K, Dehais F, Wickert C, Röttger S, Manzey D, Biede S, Reuzeau F, Terrier P eds, Human Factors: A View from an Integrative Perspective. Proc. HFES Europe Chapter Conference, Toulouse: p. 43–55.
Scholten B (2007) The Road to Integration: A Guide to Applying the ISA-95 Standard in Manufacturing. ISA, Research Triangle Park.
Seok H, Nof S Y, Filip F G (2012) Sustainability decision support system based on collaborative control theory. Annual Reviews in Control, 36(1): 85–100.
Sheridan T B, Verplank W L (1978) Human and Computer Control of Undersea Teleoperators. Man-Machine Systems Laboratory, Dept. of Mechanical Engineering, MIT, Cambridge, MA.
Sheridan T B (1992) Telerobotics, Automation, and Human Supervisory Control. MIT Press, Cambridge, MA.
Simon H (1955) A behavioral model of rational choice. The Quarterly Journal of Economics, LXIX (February): 99–118.
Simon H (1956) Rational choice and the structure of the environment. Psychological Review, 63(2): 129–138.
Simon H (1957) Models of Man. John Wiley, New York.
Simon H (1960/1977) The New Science of Management Decision. Harper & Row, New York (revised edition: Prentice Hall, Englewood Cliffs, NJ, 1977).
Ştefănoiu D, Borne P, Popescu D, Filip F G, El Kamel A (2014) Optimization in Engineering Sciences: Metaheuristics, Statistic Methods and Decision Support. ISTE, John Wiley & Sons, London.
Thoben K D, Busse M, Denkena B, Gausemeier J (2014) Editorial: System-integrated Intelligence – New Challenges for Product and Production Engineering in the Context of Industry 4.0. In: 2nd International Conference on System-Integrated Intelligence: Challenges for Product and Production Engineering, Procedia Technology, 15: 1–4.
Toffler A (1980) The Third Wave: The Classic Study of Tomorrow. Bantam, New York.
Unver H O (2013) An ISA-95-based manufacturing intelligence system in support of lean initiatives. The International Journal of Advanced Manufacturing Technology, 65(5–8): 853–866.
Van Brussel H, Valckenaers P, Wyns J (1997) HMS - holonic manufacturing systems test case (IMS Project). In: Kosanke K, Nell J G, eds. Enterprise Engineering and Integration: Building International Consensus. Springer, Berlin: p. 284–292.
Valckenaers P, Van Brussel H, Bongaerts L, Wyns J (1997) Holonic manufacturing systems. Integrated Computer-Aided Engineering, 4(3): 191–201.

Van de Ven A H, Delbecq A L, Koenig Jr R (1976) Determinants of coordination modes within organizations. American Sociological Review, 41: 322–338.
Wang K, Choi S H (2014) A holonic approach to flexible flow shop scheduling under stochastic processing times. Computers & Operations Research, 42: 157–168.
Wierzbicki A (2010) Group decisions and negotiations in the knowledge civilization era. In: Kilgour D M, Eden C, eds. Handbook of Group Decision and Negotiation. Springer Science + Business Media, Dordrecht: p. 11–24.
Zamfirescu C B, Pirvu B C, Schlick J, Zuehlke D (2013) Preliminary insides for an anthropocentric cyber-physical reference architecture of the smart factory. Studies in Informatics and Control, 22(3): 269–278.
Zamfirescu C B, Pirvu B C, Gorecky D, Chakravarthy H (2014) Human-centred assembly: a case study for an anthropocentric cyber-physical system. In: 2nd International Conference on System-Integrated Intelligence: Challenges for Product and Production Engineering, Procedia Technology, 15: 90–98.
Zavadskas E K, Podvezko V (2016) Integrated determination of objective criteria weights in MCDM. International Journal of Information Technology & Decision Making, 15(2): 267–283.
Zavadskas E K, Turskis Z (2011) Multiple criteria decision making (MCDM) methods in economics: an overview. Technological and Economic Development of Economy, 17(2): 397–427.
Zavadskas E K, Turskis Z, Kildiene S (2014) State-of-art surveys of overviews on MCDM/MADM methods. Technological and Economic Development of Economy, 20(1): 165–179.
Zavadskas E K, Turskis Z, Ustinovichius L, Shevchenko G (2010) Attributes weights determining peculiarities in multiple attribute decision making method. Economics of Engineering Decisions, 21(1): 32–43.
Zavadskas E K, Antucheviciene J, Turskis Z, Adeli H (2016) Hybrid multiple-criteria decision-making methods: a review of applications in engineering. Scientia Iranica, 23(1): 1–20.
Zhong H, Levalle R R, Moghaddam M, Nof S Y (2015) Collaborative intelligence - definition and measured impacts on internetworked e-work. Management and Production Engineering Review, 6(1): 67–78.
Zuehlke D (2008) SmartFactory – from vision to reality in factory technologies. In: Proceedings of the 17th International Federation of Automatic Control (IFAC) World Congress, Seoul, South Korea: p. 82–89.
Zuehlke D (2010) SmartFactory – towards a factory-of-things. Annual Reviews in Control, 34(1): 129–138.

Chapter 2
Decision Support Systems

The chapter is meant to familiarize the reader with the general notions concerning a well-defined class of information systems, namely the decision support systems (DSS). The need for such systems was presented in the previous chapter, especially in Sects. 1.3 and 1.4. The chapter aims at answering several questions, such as: "In which decision situations and for which associated decision problems is a DSS necessary?", "What features and technology solutions characterize the class of DSSs?", "How can the entities that compose the general DSS class be classified?", "How can a DSS be constructed, and what problems can be faced during DSS design and deployment?"

The rest of this chapter is organized as follows. Section 2.1 starts with the presentation of Herbert Simon's process model of decision-making. The characteristic features of decision problems that make a DSS necessary are highlighted, together with the decision-maker's limits and constraints that should be relaxed by means of computerized support. In the same section, decision-makers are classified with respect to several criteria, such as the composition of decision units, the decision powers of participants, and so on. Section 2.2 contains the general definition of DSS adopted in this book and a presentation of several technology aspects, such as: (a) the knowledge-based framework of Bonczek, Holsapple, and Whinston, and (b) the particular subclasses of systems and tools. A special case, namely the real-time DSS for control applications, is eventually described. In Sect. 2.3, we present several classifications made from various perspectives, such as: (a) number of users, (b) type of support, and (c) "dominant" technology. A brief presentation of the systems that combine numerical models with AI (Artificial Intelligence)-based technologies is made at the end of this section. In Sect. 2.4, we address the design and construction aspects as viewed from a decision-making perspective.
Initially, the design and construction process is presented as an opportunity and a means to implement change within the organization. Several design approaches are reviewed, with particular emphasis on the incremental, prototype-based one, together with the story of developing DISPATCHER®, a practical DSS family utilized in the continuous process industries and related fields. The general criteria for software selection are eventually presented, together with a simple, collaborative decision-making procedure based on measuring individual and group preference and polarization.

2.1 Decisions and Decision-Makers

A definition of decision was given in Sect. 1.4. One can speak about a decision problem when a situation that requires action shows up, there exist several possible courses of action called alternatives, and a decision unit (formed from one or several persons) is empowered to choose an alternative and is accountable for the results of decision implementation.

2.1.1 Herbert Simon’s Process Model of Decision-Making

Decision-making (DM) is a specific form of information processing that aims at setting up an action plan under specific circumstances. The Nobel Prize laureate Herbert Simon (1960) views it as a process made up of three steps (Fig. 2.1), as follows:

• Intelligence, which consists of activities such as: (a) setting the objectives, (b) data collection and analysis in order to recognize a decision problem, and (c) problem statement;
• Design, which includes activities such as: (a) identification (or design) of possible courses of action, called alternatives, (b) model building, and (c) evaluation of various potential solutions to the problem;
• Choice, or selection of a feasible alternative, called decision, with a view to releasing it for implementation.

Simon (1977) later introduced a fourth step, which consists of the implementation of the solution and the review of the results.
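Read as a data flow, the three steps chain naturally: intelligence produces a problem statement, design produces evaluated alternatives, and choice selects one of them. The Python sketch below only illustrates this flow; the problem, the alternatives and the scoring rule are invented placeholders, not part of Simon's model:

```python
# Simon's process model read as a data flow:
# intelligence -> design -> choice (-> implementation and review).
# All concrete content below is an invented placeholder.

def intelligence(observations):
    """Collect and analyze data in order to recognize and state a decision problem."""
    return {"problem": "output below target", "data": observations}

def design(problem):
    """Identify possible courses of action (alternatives) and evaluate them."""
    return [("add a shift", 8.0), ("outsource", 6.5), ("do nothing", 2.0)]

def choice(alternatives):
    """Select a feasible alternative -- the decision -- for implementation."""
    return max(alternatives, key=lambda a: a[1])[0]

decision = choice(design(intelligence([95, 97, 92])))
print(decision)  # the best-evaluated alternative
```

The fourth step added by Simon (1977) would close the loop: implementing the decision and feeding the reviewed results back into a new intelligence phase.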

Remark There is a subtle difference between decision-making and decision-taking, though the terms are commonly used interchangeably. While in the former case the final result of the process is the chosen alternative, in the latter case the selected solution is assumed and released for implementation by an empowered person or group of persons who are accountable for the impact of the decision taken. In the rest of the book, we will assume that it is only the human who releases the decision for implementation, or acts himself.

If a decision problem can be entirely clarified and all possible decision alternatives can be fully explored and evaluated before a choice is made, then the problem is said to be completely structured; otherwise, it is said to be unstructured or semi-structured. A structured problem occurs when the state of affairs is rather stable, similar situations have been met in the past, there is no time pressure to take a highly important decision, and there are "ready-made" available

Fig. 2.1 Simon’s process model of decision making

solutions. In this case, the decision is said to be programmable. In the case of unstructured problems, no similar situations were met in the past, the available information is scarce, the consequences of a wrong decision are very serious, and time is critical. In such situations, "custom-made", non-programmable decisions are to be made (Soelberg 1967). In Table 2.1, the attributes of completely structured and totally unstructured problems are presented.

Table 2.1 Structured and unstructured problems

                        Novelty?   Information     Urgency?   High          Programmable
                                   sufficiency?               importance?   decisions?
Completely structured   No         Yes             No         No            Yes
Totally unstructured    Yes        No              Yes        Yes           No
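The attribute pattern of Table 2.1 can be restated as a small rule. The sketch below is a toy illustration: the boolean attributes and labels are ad hoc, and real structuredness is a matter of degree rather than a yes/no property; everything between the two extremes is left as a candidate for computerized support:

```python
def structuredness(novel, info_sufficient, important, urgent):
    """Label the two extremes of Table 2.1; everything else is in between.
    Boolean attributes are a crude stand-in for a continuum of structuredness."""
    if not novel and info_sufficient and not important and not urgent:
        return "completely structured (programmable decision)"
    if novel and not info_sufficient and important and urgent:
        return "totally unstructured (non-programmable decision)"
    return "semi-structured (candidate for computerized decision support)"

print(structuredness(novel=False, info_sufficient=True, important=False, urgent=False))
print(structuredness(novel=True, info_sufficient=False, important=True, urgent=True))
print(structuredness(novel=True, info_sufficient=True, important=True, urgent=False))
```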

If the problem is completely structured, an automation tool could provide a solution without any human intervention and, consequently, the task of selecting the course of action can be allocated to it. On the other hand, if the problem has no structure at all, only human inspiration can help. If the problem is semi-structured, computer support for decision-making can be envisaged.

The problem structuredness level depends on several objective factors, such as: (a) the characteristic features of the decision situation itself and (b) the level of decision power on which the decision unit is placed. It may also depend on subjective factors, including the time limits, constraints and even the temporary mood of the human decision-maker.

2.1.2 Limits and Constraints of Human Decision-Makers

The work performances of the decision-maker are influenced by several limits and constraints. Though the level of influence depends on the characteristic features of each individual decision-maker and his/her decision context, several classes of limits and constraints can be identified (Holsapple and Whinston 1996; Filip 2007), as follows:

• Cognitive limits, which concern both the quality of the available decision problem data and the decision methods, procedures and techniques mastered by the human decision-maker;
• Costs of assistants or external consultants that are possibly called on to support the work of the decision-maker. The increased dependence of the chosen solutions on the quality of services provided by assistants may also be viewed as a limit;
• Temporal constraints that must be observed when urgent decisions are to be made in time-critical situations or several decision problems are to be solved at the same time by the same person or group of persons;
• Communication or/and collaboration limits and constraints that show up when several persons who possess various backgrounds, knowledge bodies and intentions are involved in making a decision or/and implementing a chosen solution;
• Low trust of human decision-takers in the solutions recommended by computerized decision methods or/and the cost of the associated IT (software and hardware) products.

2.1.3 Classes of Decision-Makers

The term decision unit is a generic one. It may denote either one individual decision-maker (a role or a person) or an entity which is composed of several participants, sometimes called a group or multi-participant decision-maker (Holsapple and

Table 2.2 Decision-maker subclasses

Subclass                           Number of       Stable         Equal     Cooperating?   Human
                                   participants?   composition?   powers?                  support team?
Individual                         1               NA             NA        NA             NA
Unilateral                         1               NA             NA        NA             Yes
Group of peers                     2+              Yes            Yes       Yes            Yes/No
Collectivity                       2+              No             Yes       Yes/No         No/Yes
Hierarchical-organizational team   2+              Yes            No        Yes            Yes/No

Whinston 1996). Several specific subclasses of decision units are presented in Table 2.2, where 2+ means two or more participants and NA stands for not applicable. The attributes that characterize the subclasses are: (a) the number of members of the unit; (b) the participants' decision powers; (c) the stability (over a reasonably long period of time) of the unit composition; (d) the objectives pursued (cooperation or competition) by the participants; and (e) the existence of a support team.

Each person that takes decisions may be supported in performing his/her activities by a team of assistants or external consultants who either are familiar with the problem domain (it is said that they possess What-type knowledge) or master decision methods, procedures and associated IT tools (called How-type knowledge). According to Holsapple and Whinston (1996), together they form a hierarchical team or a Human Support System (HSS). The typical functions of a HSS are:

• receiving and accepting the decision-maker's requests for information (such as problem data, results of an analysis, clarification or explanation of a response previously received, or help to formulate a question) or commands to acquire new information from various sources;
• issuing outputs, which can represent feedback to the decision-maker's requests, or unsolicited and proactive messages when the information analysis performed indicates decision situations or undesirable behaviors of the decision-maker;
• maintaining and processing its own knowledge base.

Though the team members collaborate with a view to recommending a solution, they are not empowered to release it for execution and are not accountable for it. This task is to be carried out by another person (the unilateral decision-maker), a group of peers, or a hierarchically organized committee.
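For illustration only, the rows of Table 2.2 can be encoded as a small data structure; the class and field names below are ad hoc choices, `None` stands for the table's NA entries, and the table's "Yes/No" cells (meaning "either") are also encoded as `None`:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DecisionUnit:
    """One row of Table 2.2; None encodes both 'NA' and the 'Yes/No' (either) cells."""
    name: str
    participants: int  # 1, or 2 as a stand-in for '2+'
    stable_composition: Optional[bool]
    equal_powers: Optional[bool]
    cooperating: Optional[bool]
    human_support_team: Optional[bool]

UNITS = [
    DecisionUnit("individual", 1, None, None, None, None),
    DecisionUnit("unilateral", 1, None, None, None, True),
    DecisionUnit("group of peers", 2, True, True, True, None),
    DecisionUnit("collectivity", 2, False, True, None, None),
    DecisionUnit("hierarchical-organizational team", 2, True, False, True, None),
]

def multiparticipant(units):
    """Units with two or more members (the '2+' rows of Table 2.2)."""
    return [u.name for u in units if u.participants >= 2]

print(multiparticipant(UNITS))
```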

2.2 DSS Basic Concepts

The DSS appeared as a term in the early 1970s, together with the systems meant for supporting managerial decisions. One may consider that the DSS concept was anticipated by the idealized vision of Licklider (1960) of the "precognitive" man-computer systems, which were meant to "[…] enable man and computers to co-operate on making decisions and control complex situations […]". According to McCosh (2002), the term was first articulated by Scott-Morton (1967) in a seminar held in February 1964.

Like several other new concepts, DSS was initially received with great enthusiasm by part of the academic community and industry people. For example, Wagner (1981), a respected pioneer in the DSS field, hailed "the new school of thought called DSS". At the same time, the term itself, and especially its usage, was controversial. For example, Naylor (1982) claimed that DSS was a "redundant term meant to describe a subsystem of MIS (Management Information System)" and was not based on a conceptual framework. A few years later, Bonczek et al. (1984), important contributors to the progress of the DSS movement, noticed that the term was "abusively used as a new label placed on various software products, in order to obtain a competitive advantage". Since then, research and development activities and many successful practical applications (Eom 2002, 2003; Kim and Eom 2004; Filip et al. 2014) have witnessed that the DSS concept definitely meets a real need and that there is an ever broader application field for it, as Vazsonyi (1982) predicted. This development was stimulated by the noticeable change in the proficiency of all potential users in using the new technology. As Shim et al. (2002) noticed:

Managers and knowledge workers in the late 1980s and 1990s are different from earlier DSS users […] and the roadblocks of the 1980s and 1990s for using IT in executive decision making are being removed. In fact, IT is now viewed as a strategic tool that is central to the pursuit of competitive advantage. Therefore, various DSS technologies will be more accepted throughout the enterprise, from operational support to executive boardrooms. Good historical accounts on the DSS domain developments can be found in (Alter 2004; Shim et al. 2002; Power 2002b, 2008a; Hosack et al. 2012).

2.2.1 Definition and Characteristic Features

In Filip (2008), a DSS is viewed as

An anthropocentric and evolving information system which is meant to implement the functions of a human support system that would otherwise be necessary to help the decision-maker to overcome his/her limits and constraints he/she may encounter when trying to solve complex and complicated decision problems that count.

The main characteristic features of a DSS are presented in a concise manner in Table 2.3.

Table 2.3 DSS characteristics

Name: Decision Support System

Mission: to relax the limits and constraints of the human decision-maker in making and taking a decision

Attributes

Users: decision-makers, their assistants, hired consultants, other knowledge workers

Qualities: anthropocentric, adaptive to the end-user, evolving over time

Stored knowledge

- classes: descriptive (about the decision problem/domain), procedural (solvers), reasoning (about combining system functions), communication (to support message exchanging)

- sources: initial design, end-user inputs, acquired from third parties or internally created

Functions: computerized versions of the functions of the Human Support System:
- receiving and accepting the decision-maker's requests
- issuing outputs to the decision-maker and requests and orders to third parties
- maintaining and processing its own knowledge

2.2.2 DSS Technology

2.2.2.1 Particular Subclasses

The general class of DSS can be particularized into two subclasses: (a) application-oriented specific DSSs (SDSS) and (b) DSS tools (DSST). The entities of the former subclass are used by particular decision-makers (end-users) to carry out their

Fig. 2.2 DSS subclasses. DSS Decision support system; SDSS specific (application) DSS; DSSBCT DSS basic construction tool; DSSG DSS generator; GDSS group DSS; IITT integrated IT tool; ITT IT tool; SITT specific IT tool; GPITP general-purpose IT product; GPK general-purpose knowledge; SK specific knowledge; ASK application-specific knowledge; DSK domain-specific knowledge; 1…* = "one or more"

specific tasks in a well-defined organizational and technical setting. Consequently, the systems must possess application-specific knowledge (ASK). The objects of the latter subclass are used by system builders to construct the application systems. They can be basic construction tools (DSSBCT) and integrated tools (Fig. 2.2).

The DSS basic construction tools can be general-purpose or specialized IT products (GPITP and SITP). The first category covers hardware facilities, such as PCs, workstations and communication equipment, or software components, such as operating systems, compilers, editors, database management systems (DBMS), optimization libraries, GIS (Geographical Information Systems) modules and so on. Specialized technologies are hardware and software tools such as sensors, simulators, computerized decision analysis suites, model base management systems (MBMS), expert system shells, and other AI (Artificial Intelligence)-based software modules that have been created for building new application-oriented specific DSSs or for improving the performances of the existing systems.

A special subclass of tools is composed of the DSS generators (DSSG), which are prefabricated integrated systems oriented towards well-defined application domains and functions. The generators can be quickly adapted to particular instances of the same application domain, provided they are properly customized for the application characteristics and for the end-user's specific needs (Sprague 1980, 1987). As noticed in (Bhargava et al. 2007), "application-specific DSS are far easier to build [starting from a DSSG], but rarely reusable; DSS generators are far more complex to build, but can be adapted to build many specific systems".
An application-oriented specific DSS can be developed either from a system generator (possibly using additional tools), to save time, or directly from the basic construction tools, to optimize its performances, in particular the flexibility of the solution. While some tools can be software pieces which are bought and installed at the application site, others can be hired and paid per use.

2.2.2.2 Architecture

Bonczek et al. (1981) proposed a DSS generic framework, called here the BHW model, which is quite general and can accommodate the recent technologies and architectural variations. It is, consequently, adopted in this book and reviewed in this section. The BHW model is based on four essential components: (a) the language subsystem, (b) the presentation subsystem, (c) the knowledge subsystem, and (d) the problem processing subsystem (Holsapple 2008).

The Language Subsystem (LS) is the set of all communication means through which the end-user can transmit local or remote (via networks) messages containing problem data and requests for analysis or explanations, in a form which is understandable and acceptable by the system. The LS can also be utilized by other people or automation devices that play the roles of data feeders or decision implementers for sending reports upon receiving requests via the DSS or on their own initiative.

The Presentation Subsystem (PS) is the set of means utilized by the DSS to send local or remote output messages (such as requests for data to be input by the end-users themselves or by data feeders, results of information analysis, solution selection and justification, decisions released for implementation, or actions to be automatically executed) to the end-user, other people, or actuators. The LS and the PS together compose the communication subsystem (CS).

The Knowledge Subsystem (KS) contains the pieces of knowledge (K) that were input into the system from various sources (system designers, users, data feeders) or were newly created within the DSS. There are several types of primary knowledge, such as:

• descriptive knowledge: past, present or forecast state variables of the controlled object and its environment;
• procedural knowledge (or models): optimization and simulation models;
• reasoning (or meta) knowledge: rules about how to use procedural knowledge together with descriptive knowledge.

The set of secondary knowledge is composed of:

• linguistic knowledge: the application vocabulary and grammar rules which are needed by the system to understand the user's requests or the data feeders' reports;
• presentation knowledge, describing the manner in which the information is sent to the users or to decision executers;
• assimilative knowledge, to be utilized in adding new pieces of knowledge, discarding the obsolete knowledge items, or reorganizing the existing knowledge body within the knowledge subsystem.

The Problem Processing Subsystem (PPS) contains the set of pieces of software meant to process the knowledge stored within the knowledge subsystem in accordance with the requests made by the user. The set of functions performed by the PPS includes knowledge acquisition, selection, analysis, derivation, and presentation. More concretely, the PPS implements the Information analysis and Decision selection functions allocated to the system (see Sect. 1.3.3) to provide answers to questions such as:

• What happens with the controlled object? (perceiving a decision situation);
• Which are the causes of such a situation? (diagnosing the situation);
• How and why to act? (recommending and justifying a solution);
• What [could happen] if this decision [recommended by the system or input by the human] were implemented? (simulating the impact of the solution).

Holsapple and Whinston (1996) showed that the BHW model can accommodate, as a particular case, the largely utilized D/IDM (Dialogue/Interface, Data, and Models) paradigm (Sprague 1980; Sprague and Carlson 1982; Ariav and Ginzberg 1985) (Fig. 2.3).
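The four BHW components can be sketched as cooperating objects: the language subsystem parses a request, the problem processing subsystem applies procedural knowledge from the knowledge subsystem to descriptive knowledge, and the presentation subsystem renders the answer. All names and the toy "model" below are invented for illustration; the BHW framework prescribes roles, not an implementation:

```python
class KnowledgeSubsystem:
    """Holds descriptive knowledge (data) and procedural knowledge (models)."""
    def __init__(self):
        self.descriptive = {"demand": [100, 110, 120]}  # state variables
        # A toy linear-trend extrapolation standing in for a real model:
        self.procedural = {"forecast": lambda xs: xs[-1] + (xs[-1] - xs[0]) / (len(xs) - 1)}

class ProblemProcessingSubsystem:
    """Selects and runs a model against stored knowledge on request."""
    def __init__(self, ks):
        self.ks = ks
    def answer(self, request):
        model = self.ks.procedural[request["model"]]
        return model(self.ks.descriptive[request["data"]])

class LanguageSubsystem:
    """Turns the user's message into an internal request (toy grammar)."""
    def parse(self, text):
        _, model, data = text.split()
        return {"model": model, "data": data}

class PresentationSubsystem:
    """Renders the result for the end-user."""
    def render(self, value):
        return f"Recommended value: {value:.1f}"

ks = KnowledgeSubsystem()
pps = ProblemProcessingSubsystem(ks)
ls, ps = LanguageSubsystem(), PresentationSubsystem()
print(ps.render(pps.answer(ls.parse("run forecast demand"))))
```

The LS and PS pair corresponds to the communication subsystem; in a real DSS each of these roles is, of course, far richer than a string parser and a formatter.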

2.2.3 A Special Case: Real-Time DSS for Control Applications

Though, at the beginning of the DSS movement, many people viewed the system as a tool to be used only in business applications, at present there are numerous applications meant to support the solving of real-time control problems (Filip et al. 1983, 2002; Power 2002a, c, 2011; Van der Walle and Turoff 2008; Turoff et al. 2011). Even in the early years of the domain, there were respected voices who recognized that the DSS class includes application systems for time-critical process control settings. For example, Bosman (1987) noticed that control problems could be looked upon as a "natural extension" and as a "distinct element" of planning

Fig. 2.3 D/IDM model as a particular case of the BHW model (adapted from Holsapple and Whinston 1996)

decision-making processes. In (Sprague 1987), a DSS is meant to support communication, supervisory, monitoring, and alarming functions besides the traditional phases of the problem-solving process (see Sect. 2.1.1). Chaturvedi et al. (1993) stated that real-time decision-making activities for control applications regard the continuous monitoring of a dynamic environment, are oriented to short time horizons, are carried out on a repetitive basis, and must solve problems under time pressure.

It is unlikely that an econological approach (see Sect. 1.5.2), which is based on optimization, is technically possible for a large number of genuine real-time decision-making situations. At the other extreme, fully automated systems (see Sect. 1.3.2), corresponding to the higher levels of automation in the classification of Sheridan and Verplank (1978), cannot be accepted either, except in some rare cases. At the same time, one can notice that genuine real-time decision-making activities are encountered, in most cases, in crisis situations. For example, if a processing unit in a manufacturing plant must be shut down due to an unexpected equipment failure, the entire production schedule might turn obsolete. The reasonable decision is to first take adequate compensation measures to "manage the crisis", at least over the time interval which is necessary to re-compute a new production schedule or update the current one. In this case, a satisficing decision (see Sect. 1.5.2) may be appropriate. In case the crisis situation has been previously faced and successfully surpassed, an almost rule-based behavior (see Sect. 1.3.2), based on past decisions stored in the system, is an acceptable reactive approach.
As a proactive measure, the minimization of the probability of occurrence of crisis situations should be considered as one of the inputs (expressed as a set of constraints or/and objectives of the model) in the scheduling problem. For example, in a pulp and paper mill or in an oil refinery, a stop of a processing unit may cause the draining of the downstream tanks which are fed by the unit and, at the same time, an overflow might be noticed in the upstream tank from which the unit is fed. Consequently, this situation implies shutting or slowing down the unit plants that are fed from the downstream tank, and so on. Restarting the unit normally causes transitory regimes that determine serious undesired variations in the quality of the product. To prevent such situations, the schedule made up of the sequences of production rates of the processing units should be set so that the stock levels in the tanks can compensate, to as large an extent as possible, for stops or significant slowdowns (Filip 2008; Filip et al. 2002; Filip and Leiviskä 2009). Besides process control, another typical application domain of real-time DSS is disaster prevention and crisis management (Buraga et al. 2007; Gowri et al. 2016).

From the above aspects, it results that one should add another specific necessary feature for the particular subclass of DSS to be utilized in real-time control applications (Filip 1995, 2008):

An effective real-time DSS should support decisions on the preparation of good and cautious schedules as well as ad hoc pure RT decisions to solve crisis situations.
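The buffering idea in the pulp-and-paper example admits a simple back-of-the-envelope check: a downstream tank can ride through a feeder stop of duration T only if its stock covers the draw of the consuming unit over T. The sketch below is an illustration only; the rates, the constant-draw assumption and the safety margin are invented, not taken from the DISPATCHER systems:

```python
def min_buffer_stock(draw_rate, stop_duration, safety_margin=0.2):
    """Minimum tank stock (tonnes) so the downstream unit keeps running while
    its feeder is stopped for `stop_duration` hours. Assumes a constant draw
    rate; the safety margin hedges against rate variations."""
    return draw_rate * stop_duration * (1 + safety_margin)

def survives_stop(current_stock, draw_rate, stop_duration):
    """True if the current stock can absorb the stop without a shutdown."""
    return current_stock >= draw_rate * stop_duration

# A unit drawing 12 t/h from the tank, with a 4 h repair expected:
needed = min_buffer_stock(draw_rate=12.0, stop_duration=4.0)
print(needed)  # stock a cautious schedule should maintain (about 57.6 t)
print(survives_stop(50.0, 12.0, 4.0))  # 50 t covers the 48 t drawn
```

A cautious schedule in the sense above would keep every tank at or above its `min_buffer_stock` level for the longest stop considered plausible.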

2.3 DSS Subclasses

As Alter (1977) remarked in the early years of the DSS movement, "the decision support systems do not represent a homogenous category [of information systems] and one cannot speak about them in general terms". Alter compared such a wrong approach to that when "somebody speaks about pets in general without making the differences between dogs and cats or between piranha fishes and turtles". Starting from the above remark, Alter proposed a taxonomy based on the generic operations the system could support. The taxonomy included two main subclasses, namely data-oriented systems and model-oriented systems, which could be further decomposed into three and four, respectively, even more particular subclasses (Alter 1980). The taxonomy was meant to facilitate communication and to serve as a guideline for the design and deployment of application-oriented DSS.

Since Alter's proposal, a series of other classification schemes have been proposed, based on criteria such as: user type, level of generality (also called technology level), level of normativity, and the dominant subsystem and corresponding technology (Hackathorn and Keen 1981; Holsapple and Whinston 1996; Power 2002a). In Sect. 2.2.2, we presented a decomposition of the general DSS class, based on technology levels (Sprague 1980), into two particular subclasses, namely application-oriented specific systems and DSS tools. Three additional classification schemes will be presented in the sequel, with a view to proposing a terminology to be used in the rest of the book.

2.3.1 Classification 1 (with Respect to Decision Maker Type)

In Sect. 2.1.3, the decision-makers were classified in accordance with a series of criteria, such as: the number of participants who take decisions, the stability of the decision unit composition, the decision position power of its members, and so on. It goes without saying that a DSS must be designed to be compatible with the manner in which decisions are made and taken. In this respect, Hackathorn and Keen (1981) proposed structuring the general DSS class into three more particular subclasses, as follows:

1. Individual (or personal) decision support systems (IndDSS), which are used by an individual (a role or a person) to carry out his/her task. A special case of IndDSS is the Executive Information/Support System (EI/SS), which can be viewed as a natural evolution of the MIS (Management Information System). The EI/SS can offer answers to ad hoc queries of executives and members of other staff (Watson et al. 1991).

2. Group Decision Support Systems (GDSS), which belong to the class of group support systems (GSS) or electronic meeting systems (EMS) (see Sect. 3.1). They were introduced and studied by Gallupe (1986), De Sanctis and Gallupe (1987), Gray (1987), Nunamaker et al. (1991), Gray and Nunamaker (1993), and De Michelis (1986). GDSS are meant to support several individuals with similar power positions who have to collaborate at certain time moments to make and take co-decisions (Shim et al. 2002; Gray et al. 2011; Zaraté 2013). GDSS are sometimes called multiparticipant DSS (Holsapple 1991; Holsapple and Whinston 1996; Marakas 2003). At present, communication capabilities, web technology and social networks play an essential role in GDSS, enabling an "any-time, any-place" operation mode of the system. The following chapters will address the GDSS from various perspectives, such as: concepts and methods, enabling technologies, specific features, and applications. The users of GDSS may represent the interests of various departments or units of the organization.
They may have different "local" priorities and possess different bodies of knowledge and powers of influence.

3. Organizational DSS (ODSS) (George 1991; Kivijärvi 1997) or intra-organizational DSS (Power 2001), which are meant to support decision units composed of members placed on various levels of power within an organization. In case such systems are meant to support co-decisions made by people situated in different networked collaborating organizations (see Sect. 1.1), one speaks about inter-organizational DSS (Power 2001; Eom 2005).

According to Nunamaker et al. (1991), ODSS, GDSS, and negotiation systems are specific subclasses of a more general superclass named multiparticipant DSS. The operation of multiparticipant DSS may follow the models of management and control schemes described in Sect. 1.2, ranging from pure hierarchical schemes (in the ODSS case) to genuine collaborative work (in the GDSS case).

2.3.2 Classification 2 (with Respect to Type of Support)

The next criterion for structuring the general DSS class into a set of more particular subclasses is the level of normativity of the decision selection offered by the system. One can distinguish several types of support.

1. Passive assistance (PA): the system supports the end user in performing simple data/information analysis functions (see Sect. 1.3.2), in a comfortable and efficient manner, that he/she could otherwise perform manually in order to get answers to questions such as: "What is happening?" (see Sect. 2.2.2).

2. Traditional support (TS): the system mainly provides answers to questions like: "What if a certain course of action, possibly chosen by the user, were applied?"

3. Prescriptive support (PS): the system behaves as a computerized decision counselor that uses mathematical models and AI (Artificial Intelligence)-based modules to recommend a solution when the decision maker inputs his/her data into the DSS. The modern recommender systems (Ricci et al. 2011; Kaklauskas 2015) can be placed within the class of systems that offer prescriptive support.

4. Collaborative support (CS): the system enables collaboration between the human and the DSS, or among the various participants in decision making. In the former case, the system stimulates the user to introduce his/her solutions, then evaluates and, if necessary, refines them. It may also allow the end user to modify the automatically calculated solution (Filip et al. 1992; Filip and Leiviskä 2009). In the latter case, the system simulates the behavior of mediators (also called facilitators in GDSS). The modern advisory systems (Beemer and Gregg 2008; Kaklauskas 2015) may also be viewed as ones that offer collaborative support.

5. Proactive support (PrS): the system behaves as a knowledgeable and informed consultant who guides the human decision-maker in carrying out his/her task and possibly stimulates him/her to adopt new working styles. The systems of the DISPATCHER family, which will be described in Sect. 2.4, belong to the PrS subclass.
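The "traditional support" question above can be made concrete with a minimal "what-if" sketch: the user proposes a course of action and the system evaluates its consequences with a numerical model. The model, figures and function names below are purely illustrative assumptions, not part of any system described in this book.

```python
# Illustrative "what-if" evaluation: a DSS offering traditional support lets
# the user propose a course of action and reports its simulated consequences.

def simulate_profit(units_produced, unit_cost, unit_price, expected_demand):
    """A deliberately simple numerical model of one planning period."""
    units_sold = min(units_produced, expected_demand)
    revenue = units_sold * unit_price
    cost = units_produced * unit_cost
    return revenue - cost

# The decision maker asks: "What if we produce 120 units instead of 100?"
for action in (100, 120):
    outcome = simulate_profit(action, unit_cost=8.0, unit_price=12.0,
                              expected_demand=110)
    print(f"Produce {action} units -> simulated profit: {outcome:.2f}")
```

The system answers the question but leaves the choice to the user, which is what distinguishes traditional support from the prescriptive and proactive types.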

2.3.3 Classification 3 (with Respect to the Technological Orientation)

Alter's (1977, 1980) taxonomy was based on the extent to which the decision selection was influenced by the messages issued by the system. More than two decades later, Power (2001, 2002a) extended Alter's taxonomy by associating the main generic operations with the so-called dominant technologies used. He identifies several generic DSS types as follows.

1. Data-driven DSS (DadDSS): the systems include, besides the old file-based technologies (file drawers in Alter's taxonomy), several new ones, such as:

Data Warehouses and Online Analytical Processing (OLAP) (Power 2008b; Bhargava et al. 2007). Spatial DSS, which are based on mathematical models combined with GIS (Geographical Information Systems), may be viewed as belonging to this subclass too.

2. Model-driven DSS (MdDSS): the systems are characterized by the predominance of numerical models that are used for evaluating through simulation the impact of possible alternatives, performing risk analysis, and solving mono/multi-objective optimization problems to suggest a solution (Power and Sharda 2007).

3. Knowledge-driven DSS (KdDSS): these extend the meaning of the suggestion system subclass of Alter's taxonomy. The predominant technology of KdDSS, sometimes called intelligent DSS, is knowledge management (Liu et al. 2014). The subject will be addressed later in this chapter (see Sect. 2.3.4.2).

4. Communication-driven DSS (CdDSS): the systems are mainly meant to support the exchange of data and information for collaborative/cooperative decision-making. Web-based DSS (Power and Kaparthi 2002) can be placed in the CdDSS subclass.

5. Document-driven DSS (DodDSS): such systems may be viewed as a special case of DadDSS. In contrast with DadDSS, which support the analysis of structured data, a DodDSS is meant for unstructured data: web pages, multimedia documents and so on (Bhargava et al. 2007). Business intelligence and analytics-based systems (Hribar 2010; Power 2014; Kaklauskas 2015) fall within the subclass of DodDSS (see Sect. 4.1).

From a historical perspective, Power and Phillips-Wren (2011) identify seven evolution stages of DSS:

• DSS 1.0 were built using timesharing systems;
• DSS 2.0 were built using minicomputers;
• DSS 3.0 were built using personal computers and tools like VisiCalc, Lotus and Excel;
• DSS 4.0 were built using DB2 and 4th generation languages;
• DSS 5.0 were built using client/server technology on LANs;
• DSS 6.0 were built using large-scale data warehouses with OLAP servers;
• DSS 7.0 were built using Web technologies.

The above authors wonder whether Web 2.0 technologies have evolved enough to speak about a DSS 8.0. The subject will be treated in Sect. 4.2.

2.3.4 Special Cases

The particular subclasses identified in accordance with the number of participant users (see Sect. 2.3.1) and the dominant technology (see Sect. 2.3.3) may necessitate further nuancing. This will be done in this section.

2.3.4.1 Group Supporting Systems Versus Negotiation Support Systems

In Sect. 2.1.1, several particular subclasses of decision units were identified in accordance with various criteria, such as the number of participants and the existence of a common goal. A distinction was made between individual and multiparticipant decision units. One can also notice that, while a group was characterized by a set of relevant common objectives and a certain stability in time of its composition, the members of a decision collectivity may or may not share a common goal and, in addition, take part in a certain activity only on an episodic basis. In Sect. 2.3.1, we made a classification of DSS with respect to the number of participants. The subclass of group (or multiparticipant) DSS was defined for the case when more than one person uses the system. The final users (the decision-takers) may pursue a common goal (though they might also have several local secondary objectives), or may be in obvious competition and even in conflict. We speak, in the former case, about group decision making and, in the latter, about a negotiation meant to lead to collaborative decisions that are acceptable to all parties involved in the process. Kilgour and Eden (2010) state that in both cases one may speak about collaborative and interactive activities of individuals meant to reach a collective decision. As Kilgour and Eden (2010) point out, "the field of group decision and negotiation exhibits both unity and diversity". The above authors accept various viewpoints about group decision and negotiation, such as:

• "Group decisions and negotiations cannot be easily disentangled. There is only a minor differentiation, which consists in viewing the negotiations carried out as soft social and psychological group decision activities.
• In contrast with group decision, where the decision problem is shared by more than one party who must collectively find a solution for which participants bear their specific responsibility, in group negotiation the concerned parties may reach a collective solution or make no choice at all".

In the remaining part of the book, we will assume that multiparticipant DSS support the activities in both group decision and negotiation processes, and we will highlight the differences whenever necessary.
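As an illustration of the group decision case, a multiparticipant DSS may aggregate the individual rankings of the participants into a collective ordering. The sketch below uses the Borda count, one common aggregation scheme; the alternatives and the participants' rankings are hypothetical.

```python
# Borda count: each alternative earns points according to its position in
# every participant's ranking; the collective order follows the totals.

def borda(rankings):
    """rankings: list of lists, each ordered from most to least preferred."""
    n = len(rankings[0])
    scores = {}
    for ranking in rankings:
        for position, alternative in enumerate(ranking):
            scores[alternative] = scores.get(alternative, 0) + (n - 1 - position)
    # Highest total wins; a tie would need a further rule (e.g. a runoff).
    return sorted(scores, key=scores.get, reverse=True)

group = [["A", "B", "C"],   # participant 1
         ["B", "A", "C"],   # participant 2
         ["A", "C", "B"]]   # participant 3
print(borda(group))  # collective ordering
```

Such a mechanism fits the group decision setting, where a collective choice is sought; in a negotiation setting the parties may, as noted above, make no choice at all.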

2.3.4.2 Mixed Knowledge Systems

As discussed in Sect. 1.5.2, practical field experience has shown that, in many cases, the problems are either too complex for a rigorous mathematical formulation, or too costly (with respect to time or/and money) to be solved by using available computerized numerical optimization models. As stated in that section, an optimization-based approach assumes an econological (economically-logical) model of the decision-making process but, in real life, other models of decision-making activities, such as bounded rationality or the implicit favorite, may be applicable to describe the actual manner in which decisions are usually made and taken. Therefore, several alternative technologies based on artificial intelligence have been taken into consideration to be combined with numerical models and utilized in DSS (Dutta 1996; Dhar and Stein 1997; Filip et al. 1992; Tecuci et al. 2007). Three decades ago, Simon (1987) anticipated the combination of numerical models with AI-based methods and tools:

The MS/OR profession has, in a single generation, grown from birth to a lively adulthood and is playing an important role in the management of our private and public institutions. This success should raise our aspirations. We should aspire to increase the impact of MS/OR by incorporating the AI kit of tools that can be applied to ill-structured, knowledge-rich, nonquantitative decision domains that characterize the work of top management and that characterize the great policy decisions that face our society.

The term Artificial Intelligence (AI) is commonly used to indicate a branch of computer science aiming at making an artifact reason in a way similar to the human manner of reasoning. Besides the classical technologies used in DSS, such as computerized numerical models and database management systems, there are several AI-based technologies (Feigenbaum 1977; Pomerol 1997; Filip and Barbat 1999; Cohen and Feigenbaum 2014; Akerkar 2014) that are usable and effectively used in DSS design.

Expert systems (ES) were defined by E. Feigenbaum as "intelligent computer programs that use knowledge and inference procedures to solve problems that are difficult enough to require significant human expertise for their solution" (Akerkar 2014, p. 179). As in the case of DSS, one can identify several particular subclasses of expert systems, such as: (a) application ES, also called Knowledge-Based Systems (KBS), which contain problem-specific knowledge; (b) system shells, which are prefabricated systems, valid for one or more problem types, that support straightforward knowledge acquisition and storage; (c) basic tools, such as specialized programming languages. The similarities between ES and DSS, as the latter are presented in Sect. 2.2.1 above, are obvious. There are also similarities from an application-based perspective.
Many problem types, such as prediction, simulation, planning and control, have been reported to be solved by using both expert systems and traditional DSS. From a DSS perspective, one may view expert systems as software tools to be used as components in DSS. At the same time, from the ES side, one could state that DSS is merely a term denoting an application of a KBS. Even though these views might be easily explained by the backgrounds of system builders and the commercial interests of IT product and service providers and consulting firms, there is indeed a fuzzy border between the two concepts, DSS and ES. However, a thorough analysis can reveal several differences between typical ESs and typical DSSs (Ford 1985). The most important ones are:

• the application domain is well-focused in the case of ES and rather vague, variable and, sometimes, even unpredictable in the case of DSS;

• the information technology used is mainly based on symbolic computation in the ES case and is still heavily dependent on numerical models and databases in traditional DSS cases;
• the user's initiative and attitude towards the system are more creative and free in the DSS case, in contrast with the ES case, where the solution may simply be accepted or rejected (Filip 2005, 2008).

The central idea of Case-Based Reasoning (CBR) systems consists in reusing solutions already adopted and proved to be effective in solving previously encountered problems that are similar to the current decision problem. While standard expert system functioning is based on deductive processes, a CBR system's is based on induction. The operation of CBR systems includes the first or all three of the following phases:

• selecting, from a case base, one (or several) decision situation(s) found to be similar to the current one by using an adequate similarity measure;
• adapting the selected case(s) to the characteristic features of the problem to solve (this operation is performed by an expert system that reasons on the differences between the problems);
• storing and automatically indexing the just-processed case for further learning and later use.

Artificial Neural Networks (ANN), also called connectionist systems, are used for highly unstructured, hard problems. Their operation is based on two fundamental concepts: (a) the parallel operation of several independent learning information-processing units, and (b) the law that enables the processors' adaptation to the current information environment. As Monostori and Barschdorff (1992) remarked, expert systems and ANNs differ mainly with respect to the manner in which they store knowledge: it is stored in a rather explicit (mainly as rules or frames) and understandable manner in the former case, and in an implicit (as weights and thresholds) manner, hardly comprehensible to humans, in the latter.
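The contrast between the two manners of storing knowledge can be illustrated with a toy sketch: the same decision is encoded once as an explicit, human-readable rule and once as numeric weights and a threshold that produce identical behavior without "reading" as a rule. The decision rule, the weights and the threshold below are hypothetical.

```python
# Expert-system style: the knowledge is an explicit, human-readable rule.
def rule_based(stock_ok, quality_ok):
    if stock_ok and quality_ok:
        return "release"
    return "hold"

# Connectionist style: the same behavior hidden in weights and a threshold.
# These numbers implement a logical AND, but nothing in them "reads" as a rule.
weights, threshold = (0.6, 0.6), 1.0

def network_based(stock_ok, quality_ok):
    activation = weights[0] * stock_ok + weights[1] * quality_ok
    return "release" if activation >= threshold else "hold"

# Both encodings decide identically on every input combination.
for s in (0, 1):
    for q in (0, 1):
        assert rule_based(s, q) == network_based(s, q)
```

The rule can be inspected and modified directly; changing the network's behavior means adjusting its numbers, typically by re-training.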
This difference is also reflected in knowledge acquisition and modification. While knowledge acquisition is a rather complex process in the case of ES and simpler in the case of ANN, knowledge modification is relatively straightforward in the case of ES but might require re-training from the very beginning when a new element is added to an ANN. An application of ANN in workshop control is given by Nicoară et al. (2011).

Agent technology has gained traction since the mid-1990s (Dzitac and Bărbat 2009). Franklin and Graesser (1996) define an autonomous agent as an artifact "situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future". The main attributes of an intelligent agent are: (a) reactivity, (b) autonomy, (c) learning, (d) cooperation, (e) reasoning, (f) communication, and (g) mobility. It autonomously interacts with its environment and makes informed decisions based on its perceptions. There are many relevant applications of agent technology in the field of medical diagnosis and prescription and multiparticipant decision units (Tecuci

1998; Bowman et al. 2001; Zamfirescu 2003; Darren et al. 2005; Tecuci et al. 2007).

Tecuci et al. (2016) developed a methodology and a corresponding set of cognitive agents that form a special subclass called Disciple. The agents are meant to play the role of intelligent assistants that use evidence-based reasoning. They "learn the expertise of problem solving directly from human experts, support experts and non-experts in problem solving and decision-making and teach their expertise to students". The users, who may not be computer specialists but possess the What-type (application) knowledge and the How-type (problem-solving method) knowledge, teach the agents by means of examples and explanations and supervise and correct their behavior.

Young (1983) noticed that while some decision support systems are oriented towards the left hemisphere of the human brain, others are oriented towards the right hemisphere. While in the former case the quantitative and computational aspects prevail, in the latter, pattern recognition and analogy-based reasoning are resorted to. Based on this observation, there is a significant trend towards combining numerical models with models that emulate human reasoning to build advanced DSS.

Since the beginning of the DSS movement, numerical optimization and simulation models have been, besides datasets and database management systems, the essential characteristic constituents of DSS. Their main advantages are: compactness of data representation, computational efficiency (provided the model is correctly formulated), and the availability of quite a large number of computerized algorithms organized as high-quality software libraries (see, for example, the NEOS Optimization Guide at http://www.neos-guide.org/Optimization-Guide). On the other hand, numerical models may present a number of disadvantages. They are the result of intellectual processes of abstraction and idealization.
Consequently, the models can be applied to problems that are characterized by a certain structure. However, this is hardly the case in many real-life problems. In addition, the use of numerical models assumes that the user possesses certain skills to formulate and experiment with them (Dutta 1996).

To overcome the above problems, a solution is to view AI-based technologies as possible complements to numerical models. The idea has been around for several decades (Turban and Watkins 1986). Various terms, such as "tandem systems", "expert DSS (XDSS)", and "mixed knowledge DSS", have been proposed to name the systems that combine numerical models with AI-based tools. A possible assignment of various technologies to decision-making activities is given in Table 2.4 (Filip 2008).

Table 2.4 A possible task assignment in DSS
Columns (in order): EU, NU, NM, ES, CBR, ANN, IA, BI&A

Intelligence
• Setting objectives: E, M, I/M
• Perception of the decision situation: I, P, P/E, P
• Problem recognition and classification: I, P, M/I, P

Design
• Selecting the approach: E, M
• Selecting the model type: I, P
• Model building: P, P, P/I
• Model validation: I, M
• Identifying/designing alternatives: P, P, P, I

Choice
• Experimenting models:
  – Model solving: E, P, P
  – Result interpreting: I, M, M/E
  – Sensitivity analysis: I, M, I
• Decision selection: I, I, M/I, P, I
• Release for implementation: E, E

Legend: EU expert user; NU novice user; NM numerical model; ES rule-based expert system; CBR case-based reasoning; ANN artificial neural network; IA intelligent agent; BI&A business intelligence and analytics. Entries: P possible; M moderate; I intensive; E essential.

An extended view to create integrated DSS is proposed by Liu et al. (2010). The authors present a multiple-perspective integration of the well-known components of the D/IDM framework (see Sect. 2.2.2) with modern technologies as follows:

• Information integration perspective: the data component is expanded to include the technologies for the retrieval of data, information, and knowledge from multiple sources, such as data warehouses. This can be further extended by unstructured text bases, web content, and GIS (Geographical Information Systems) to obtain Spatial DSS. Corresponding processing modules for data and text analysis, such as OLAP (On-line Analytical Processing), data mining and text mining, are added accordingly.
• Model integration: the model component is expanded to include modern business models such as ERP (Enterprise Resource Planning), SCM (Supply Chain Management) and CRM (Customer Relationship Management). Qualitative models and the corresponding processing modules to provide decision suggestions and human user guidance are included.
• Presentation (dialogue) integration: combined modes and styles of man-machine communication are allowed.

2.4 DSS Construction

The I&CT (information and communication technology) vendors continuously release to the market ever more modern hardware and software products and provide comprehensive services for project management in information system development. In addition, new business models and technology trends are gaining traction on the market, such as: increased usage of the Internet, business intelligence and analytics, and viewing I&CT/Software as a Service in conjunction with cloud computing.

During the process of designing and implementing any information system, one can face sequences of decisions which should be made, at certain time moments, with respect to the choice of the most adequate alternative concerning several critical aspects, such as: the system orientation, the composition of the design team, the method to be adopted, the I&CT tools to be utilized, the resources to be allocated, and so on. There are various aspects, of both a technical and a non-technical nature, which should be taken into consideration. Among the main aspects which might make the designer's decisions difficult are: (a) the diversification of the possible technical constituents, and (b) the requirements for solution quality set by users who are ever better informed and have to face ever fiercer competition. To choose the appropriate solutions for the decision situations which can be encountered in the process, an approach based on multi-attribute decision models (MADM) could be effective.

In this section, we will use a decision-making perspective to survey several methodological and practical aspects of designing effective, usable, useful, and actually utilized DSS, following and updating the aspects presented in Filip (2011, 2012) and Borne et al. (2013, Chap. 7).
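A minimal sketch of how a multi-attribute decision model might rank design alternatives is given below, using simple additive weighting (SAW), one of the most elementary MADM techniques. The alternatives, attributes, weights and scores are hypothetical placeholders that would, in practice, be elicited from the design team and the client.

```python
# Simple additive weighting (SAW): each alternative gets the weighted sum of
# its (normalized) attribute scores; the highest total is recommended.

def saw(alternatives, weights):
    """alternatives: {name: {attribute: score in [0, 1]}}; weights sum to 1."""
    totals = {name: sum(weights[a] * s for a, s in scores.items())
              for name, scores in alternatives.items()}
    return max(totals, key=totals.get), totals

weights = {"cost": 0.4, "fit_to_needs": 0.4, "vendor_support": 0.2}
alternatives = {
    "buy_generator":  {"cost": 0.5, "fit_to_needs": 0.9, "vendor_support": 0.8},
    "build_in_house": {"cost": 0.7, "fit_to_needs": 0.8, "vendor_support": 0.4},
    "lease_saas":     {"cost": 0.9, "fit_to_needs": 0.6, "vendor_support": 0.7},
}
best, totals = saw(alternatives, weights)
print(best, totals)
```

SAW assumes the attributes are independent and commensurable after normalization; richer MADM methods relax these assumptions at the price of more elicitation effort.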

2.4.1 Influence Factors

There are several factors which can influence the process of designing and implementing an information system, such as: (a) the participants in the process, (b) the orientation and purpose of the system, (c) the organizational setting, (d) the standards to be utilized, and so on. They should be taken into consideration by the management team of the target enterprise, as well as by the designer, when a decision on creating and installing a DSS is to be made.

2.4.1.1 People

The people involved in DSS design and implementation should cooperate closely as a team to obtain a good solution for the allocated resources (time, manpower, money). They all should take part and contribute, to various extents, to the process, from the first moment of discussing the idea of a DSS until its "steady-state" operation and impact evaluation. The following generic classes can be identified: (a) "clients", (b) designers, and (c) I&CT vendors and service providers.

The members of the "client" class include the project "sponsor" (a manager) and the project "champion", who represents the interests of the future direct ("hands-on") or indirect ("beneficiary") actual users (who may also participate in the design and implementation of the system). The "project champion" possesses the necessary knowledge of the application domain (the so-called What-type knowledge). The "project sponsor" possesses the authority, since he/she is empowered to represent the interests of the organization and, consequently, has the right to accept or reject the project solutions and to decide on further allocating the necessary resources to continue the project.

The designers can be members of an I&CT department of the target organization or/and a team of analysts and programmers of a consultancy firm. They master the design techniques (the How-type knowledge) and are familiar with the I&CT products available on the market. The I&CT manufacturers and vendors, and other service providers, can adapt and alter the I&CT products and services to be utilized.

2.4.1.2 Orientation and Purpose

The information system may be oriented to serve a certain generic class of users ("roles") or to support a specific group of persons with names, identities and specific IT skills ("actors"). The system purpose might be either to facilitate and make more comfortable and efficient the work of the users, or to promote change. In the latter case, one can use the model of planned change (Kolb and Frohman 1970; Sharma 2006, p. 45). The model is composed of seven steps to be performed in collaboration by the users and the designers (the change agents) in the process of creating a DSS meant to facilitate implementing the change:

• Exploring: evaluating the needs of the organization and the necessary competences of the system constructors;
• Entry: establishing the objectives to be agreed on by both the organization's empowered representatives and the designers;
• Diagnosis: collecting the data, defining the problem and estimating the necessary resources;
• Planning: setting up the work plan and allocating the corresponding resources;
• Action: designing and implementing the DSS and training its users;
• Stabilization and evaluation: assessing the process and the project impact;
• Termination.

2.4.1.3 Constraints and Standards

The target organization where the information system is to be implemented may strongly influence the solution and the system-building process by imposing a set of constraints. There might be constraints caused by several factors, such as: (a) the insufficient I&CT skills or/and confidence of the future users, and (b) scarce available data or/and limited internal data access rights of the external consultants. Several integration problems may show up, caused by the legacy IT systems and infrastructure or/and the operating procedures permitted within the organization.

Nowadays, standards play a central role in design. The International Organization for Standardization (ISO) is an excellent source of documents to be utilized to set the stage for useful, usable and used solutions. The standards for usable (traditionally called "user-friendly") interfaces, such as those of the series ISO 9241 (2016) ("Ergonomics of Human–System Interaction"), are recommended and can contribute to obtaining a user-centered solution. Other standards, such as ISO 9241-171:2008 ("Guidance on Software Accessibility") and ISO 9241-151:2008 ("Guidance on World Wide Web User Interfaces"), are of particular importance in the context of modern information systems, which are ever more oriented towards www technologies. Galvan et al. (2015) recommend the new ISO/IEC 29110 standard for the project management process in very small team settings.

Other aspects, such as previous experience gained with implementing DSS within the organization, industry competitors' initiatives, legislative pressure and, the most serious ones, the available budgets and intended due dates, may also influence the design and construction process.

2.4.2 Design and Implementation Approaches

There are various approaches to designing, building and implementing an information system. They can be grouped in accordance with several aspects:

• Orientation: software-centric versus work-centric;
• Method to be adopted: the lifecycle method or the evolving/adaptive design, which is based on the use of a prototype;
• I&CT tools and platforms to be utilized: general-purpose products versus integrated suites/generators/shells;
• Source of components: buying IT products, or using IT as a Service (ITaS) or Software as a Service (SaaS);
• Place of construction: within the target organization or at the consultant's site.

2.4.2.1 Technology Oriented or Work Centric

In an article about Work System Theory (WST), Alter (2011) noticed:

The default assumption in much of the Information Systems (IS) discipline is that systems are technical artifacts that users use, rather than sociotechnical systems in which people participate. In contrast, WST's default assumption is that human participants are essential elements of sociotechnical work systems, not just users of hardware and software. That is why the work system framework […] contains the term participants rather than users. A project collaboration approach would be less likely to misconstrue IS projects and IT projects. IS projects managed as work system projects might encounter less resistance and fewer surprises than IS projects managed as the creation and installation of IT artifacts.

In the particular case of collaborative systems, Briggs et al. (2009, 2015) propose a multi-layer, socio-technical design model. They remark that there are many information technology products available on the market to support collaborative activities which, however, do not necessarily assure effective collaboration. The approach recommended to obtain the expected value of the technology consists in a "combination of actors, hardware, software, knowledge, and work practices to facilitate groups in achieving their goals in effective and efficient way". Consequently, Briggs et al. (2015) identify six (initially seven) areas of concern for designers of collaborative systems: (a) collaboration goals, (b) group deliverables, (c) group activities, (d) group procedures and techniques, (e) collaboration supporting tools, and (f) collaboration behaviours.

From the general model above, one can infer that the design of a multiparticipant (group) DSS (viewed as a collaboration supporting tool) should take place only after several prerequisite steps have been completed in sequence. The subject will be developed in Sect. 3.4, where collaboration engineering is described.

2.4.2.2 The Prototype-Based Approach

The traditional lifecycle-based method requires several steps, such as system analysis, design, implementation, and operation, which are carried out in a sequential ("cascade") manner. It also implies that well-defined procedures and checkpoints are strictly observed and the solutions adopted are well documented. It is, consequently, recommended for large-scale applications.

The origins of the prototype-based method (Shelly et al. 2010) in the field of DSS design can be traced back to the mid-1970s and the empirical observation that 80 % of the design ideas in the field are wrong (Ness 1975). Consequently, in order to avoid wasting resources, it was proposed to spend 20 % of the resources in the early stages of design and construction on identifying the 80 % of wrong ideas, so that the remaining 80 % of the resources could be utilized to implement the remaining 20 % of ideas, which are hoped to be correct. When adopting the prototype-based method, there are a few basic principles to be observed:

• The process starts with approaching the most critical problems of the target organization, so that the user’s confidence could be gained as early as possible. The early requirements can be formulated in collaboration with the user in a quick and even simplified manner. • The information system is developed in several cycles which include activities such as prototype experimentation, evaluation, and modification. The cycles should be as short as possible and the cost of the first version must be very low, in order not to lose the user’s interest and confidence. • The evaluation of the usage of the preliminary versions is performed on a permanent time basis. Two main types of prototypes have been commonly utilized (Sprague and Carlson 1982): (a) the throwaway prototype and (b) the evolving one. While the former is only utilized to test the design ideas and then is discarded (the next versions are re-designed by possibly using new technologies and methods), the latter consists in a series of improvements of the initial version. The prototype-based (also called adaptive/incremental/iterative) methods allow for obtaining a highly customized, early utilizable and helpful solution, even though the information on organization and its business context could be, at the starting point of the process, scarce and uncertain. On the other hand, methods may cause a tendency to continually modify the solution or, on the contrary, to adopt too early a solution which is imperfect or incomplete. In (Filip 1995; Borne et al. 2013, Chap. 7), is presented the story of constructing DISPATCHER®, a family of Decision Support Systems (DSS) which are meant to support the logistics and production control decisions to be made in the milieu of the continuous process industries and related fields. The DISPATCHER® project started in early 80s as an optimization model and corresponding software for production scheduling. 
Since then, under the influence of various factors (such as the users' changing needs and the improvement of their I&CT skills, the specific characteristic features of the target enterprises, and the new products and technologies released in the field of I&CT), several application versions were designed and deployed in industrial complexes made up of processing units interconnected through tanks, for example refineries, pulp and paper mills, chemical plants, and water systems. The initial application system has evolved towards a complex solution, a DSS generator, which can be adapted to new business models (such as the collaborative "extended"/"networked"/"virtual" enterprise) to support new functions and usages. It includes new constituents, such as a three-level modeling scheme of the plant (expressed in terms of final users, analysts and programmers, respectively), AI (Artificial Intelligence)-based guiding facilities, and model solvers and experimentation tools (Fig. 2.4).

Fig. 2.4 Evolution of DISPATCHER DSS (Filip 2012; Borne et al. 2013)

2.4.2.3 I&CT Products or Services?

An important design decision consists in making a choice between buying and leasing I&CT products or services. For example, in recent years, the approach of using SaaS (Software as a Service) or ITaaS (IT as a Service) has become more and more popular. This business model (Carraro and Chong 2006; Trumba 2007; Hine and Laliberte 2011) means that the software companies that provide SaaS host the applications on their own servers, to be accessed on request, via the Internet, by client organizations only when necessary. SaaS is usually associated with cloud computing (Avram 2014). The pricing scheme can be based on monthly lease fees, or it can be of a pay-per-use type, instead of an initial license cost and annual maintenance fees. The SaaS pricing scheme is apparently of particular interest for SMEs (small and medium enterprises) that have a limited I&CT infrastructure and cannot afford hiring and training their own skilled personnel. On the other hand, when a decision is to be made, one should take into account the long-term running costs and, especially, the data security and response time constraints. In addition, the organization's management should be conscious of the fact that they might become a "captive client" of the service provider. At present, evolving, modular and customizable platforms are available. An example is presented in Sect. 5.3.
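The buy-versus-lease trade-off discussed above is, at its core, a cumulative-cost comparison. The following sketch contrasts an up-front license plus annual maintenance with a monthly SaaS fee; all figures are hypothetical placeholders, not actual vendor prices.

```python
# Rough break-even sketch for the buy-vs-lease decision discussed above.
# All monetary figures are hypothetical assumptions, not vendor quotations.

def cumulative_cost_license(years, license_fee, annual_maintenance):
    """Total cost of ownership for a bought license after `years` years."""
    return license_fee + annual_maintenance * years

def cumulative_cost_saas(years, monthly_fee):
    """Total cost of a SaaS subscription after `years` years."""
    return monthly_fee * 12 * years

for y in range(1, 6):
    buy = cumulative_cost_license(y, license_fee=20_000, annual_maintenance=3_000)
    lease = cumulative_cost_saas(y, monthly_fee=700)
    print(f"year {y}: buy={buy:>6} lease={lease:>6} "
          f"cheaper={'buy' if buy < lease else 'lease'}")
```

With these illustrative figures the subscription is much cheaper in the first years, while the bought license becomes cheaper in the longer run; the crossover point, together with the security and "captive client" concerns noted above, is what the decision-maker has to weigh.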

2.4.3 Selection of the I&CT Tools

The selection of the I&CT tools should be viewed as a multi-attribute decision-making (MADM) problem (Filip 2005; Peng et al. 2011; Kou et al. 2012; Stefanoiu et al. 2014, Chap. 4; Zavadskas et al. 2014).

2.4.3.1 Criteria

Several general criteria to be used in selecting and ranking the possible I&CT products available on the market, adapted from (Le Blanc and Jelassi 1989; Dhar and Stein 1997; Power 2005, p. 10, 2008b), are presented in Table 2.5. A useful list of pitfalls to be avoided when selecting a software product is presented by Software Resources (2016). It includes the following 12 deadly mistakes: (a) buying the same software as the competitors; (b) buying software based on features alone and overlooking other critical factors (scalability, flexibility, extensibility, technology and cultural fit, affordable cost, sufficient technical support and infrastructure); (c) neglecting the proper consideration of the vendor's reputation; (d) buying software without focusing on the implementation partner; (e) taking into consideration only the low initial costs and overlooking significantly higher ongoing costs; (f) buying software using input from an elite group, without getting buy-in from the organization at large; (g) choosing the popular software without considering all the possible and affordable options; (h) buying software that is too complex; (i) making a choice without properly defining the requirements; (j) buying software that is either at the end or at the beginning of its product lifecycle; (k) buying software that is based on a "dying technology"; (l) selecting a software only to fix the current business problems, instead of implementing the change (see Sect. 2.4.1). A systematic methodology for software evaluation and selection through the usage of MADM was proposed by Moriso and Tsoukias (1997), and an experimental expert system is described by Vlahavas et al. (1999). At present, there exist several independent on-line services for supporting software evaluation and selection, for example Technology Evaluation Center (TEC 2016), Software Resources (2016), and Project Perfect (PP 2016).

Table 2.5 Evaluation criteria

  Subset                        Evaluation criterion                        Preferred value   Collective properties
  Adequacy of methods           Accuracy of results                         ▲                 Completeness
                                Response time                               ▲
                                Tolerance for poor quality of data          ▲
                                Explanation features                        ▲
  Quality of implementation     Scalability                                 ▲                 Nonredundancy
                                Flexibility                                 ▲
                                Code size                                   ▼
                                Easy integration                            ▲
                                Informational and functional transparency   ▲
                                Usability                                   ▲                 Decomposability
                                Reliability and robustness                  ▲
                                Documentation completeness                  ▲
  Acquisition and exploitation  Delivery completeness                       ▲                 Operability
                                Price and delivery time                     ▼
                                Provider reputation                         ▲
                                Dependence on technical assistance          ▼

  ▲ The highest value is preferred; ▼ The lowest value is preferred
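As an illustration of the multi-attribute view behind Table 2.5, the following sketch ranks two hypothetical tools with a simple weighted-sum model; criteria where the highest value is preferred (▲) and those where the lowest value is preferred (▼) are normalized differently before weighting. All tool names, weights and scores below are invented for illustration.

```python
# Minimal weighted-sum MADM ranking sketch for criteria in the spirit of
# Table 2.5. Criteria marked benefit=True play the role of "▲" (higher is
# better); benefit=False plays the role of "▼" (lower is better).

criteria = [  # (name, weight, higher_is_better) -- illustrative assumptions
    ("accuracy",            0.4, True),
    ("price",               0.3, False),
    ("provider_reputation", 0.3, True),
]

tools = {  # hypothetical candidate products and raw scores
    "Tool A": {"accuracy": 8, "price": 1200, "provider_reputation": 7},
    "Tool B": {"accuracy": 6, "price": 800,  "provider_reputation": 9},
}

def score(tool_values):
    """Weighted sum of normalized criterion values for one tool."""
    total = 0.0
    for name, weight, benefit in criteria:
        column = [v[name] for v in tools.values()]
        if benefit:  # "▲": divide by the best (largest) observed value
            norm = tool_values[name] / max(column)
        else:        # "▼": divide the best (smallest) value by this one
            norm = min(column) / tool_values[name]
        total += weight * norm
    return total

ranking = sorted(tools, key=lambda t: score(tools[t]), reverse=True)
```

Here the cheaper, better-reputed Tool B outranks the more accurate Tool A; changing the weights (e.g. raising the weight of accuracy) can reverse the order, which is exactly why the weights must be agreed upon before the evaluation.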

2.4.3.2 A Simple Collaborative Selection Algorithm

In Sect. 1.5.3.1, we presented a simple decision problem concerning the selection of the approach to be used in constructing an application information system. Section 1.5.3.3 contained several possible rules for aggregating the preferences of the people involved in the decision-making process. They took into account the decision powers of the participants, determined by their legal and rational positions. In the sequel, a simple and practical method proposed by Coman (1996) is reviewed to give an example of a collaborative decision-making process. It is based on the Intensity-Polarity-Voting (IPV) model. Let us assume that there are:

• $n_d$ individual decision-makers, $D_k$ ($k = 1, 2, \ldots, n_d$);
• $n_a$ possible courses of action (alternatives), $A_i$ ($i = 1, 2, \ldots, n_a$);
• $n_c$ evaluation criteria, $EC_j$ ($j = 1, \ldots, n_c$).
To aggregate the individual preferences, the model uses two basic concepts: (a) the individual attitude of each member of the group towards the alternatives with respect to the set of criteria, and (b) the attitude of the group. The individual attitude is defined by two metrics:

1. The individual position of participant $D_k$ towards the alternative $A_i$ with respect to the evaluation criterion $EC_j$ is expressed by a score (also called attribute level, or perceived value), $v_{ij}^k$, where $k$, $i$, and $j$ denote the participant, the alternative and the criterion, respectively. In addition, for each evaluation criterion $EC_j$, upper and lower acceptable limits (sometimes called thresholds), denoted by $vu_j$ and $vl_j$, respectively, should be set. In the sequel, we will assume the scores are normalized, i.e. all variation domains are defined by the same limits, for example $[0, 1]$ or $[0, 10]$.

2. The individual intensity of (or the consideration for) the position of participant $D_k$ with respect to evaluation criterion $EC_j$ is denoted by $c_j^k$. The intensity depends either on the power position of the participant within the decision unit or on his/her perceived and/or stated determination, competence and even ability to sustain his/her own position during the debates concerning the $j$-th aspect of evaluation. The intensity may take numerical values within a certain numerical interval or can be expressed through verbal statements, such as: "I believe", "It is obvious", "I am sure on this subject matter", and "I insist". Even in the latter case, numerical values can be associated with the verbal expressions. The group attitude is described by three metrics, as follows:

1. The group position, $gv_{ij}$, towards a certain alternative, $A_i$, with respect to the evaluation criterion, $EC_j$. It is calculated as the center of gravity of the individual positions:

$$gv_{ij} = \frac{\sum_{k=1}^{n_d} v_{ij}^k \, c_j^k}{\sum_{k=1}^{n_d} c_j^k} \quad (i = 1, 2, \ldots, n_a; \; j = 1, \ldots, n_c) \qquad (2.1)$$

2. The group intensity regarding the group position towards a certain alternative, $A_i$, with respect to the evaluation criterion $EC_j$. It is denoted by $gc_{ij}$ and is calculated as follows:

$$gc_{ij} = \sum_{k=1}^{n_d} c_j^k \quad (i = 1, 2, \ldots, n_a; \; j = 1, \ldots, n_c) \qquad (2.2)$$

3. The group polarization regarding the group collective position towards $A_i$ with respect to $EC_j$ measures the lack of consensus in the group about a certain aspect of the selection process. It is denoted by $gp_{ij}$ and is calculated as follows:

$$gp_{ij} = \sum_{k=1}^{n_d} p_{ij}^k \quad (i = 1, 2, \ldots, n_a; \; j = 1, \ldots, n_c) \qquad (2.3)$$

where $p_{ij}^k$ is the contribution of the participant $D_k$ to the lack of consensus concerning the evaluation of alternative $A_i$ with respect to evaluation criterion $EC_j$, and is calculated as follows:

$$p_{ij}^k = c_j^k \left| v_{ij}^k - gv_{ij} \right| \quad (k = 1, 2, \ldots, n_d; \; i = 1, 2, \ldots, n_a; \; j = 1, \ldots, n_c) \qquad (2.4)$$

The global merit of an alternative $A_i$ is denoted by $J_i$ and is calculated as an aggregated utility function as follows:

$$J_i = \sum_{j=1}^{n_c} w_j \, gv_{ij} \quad (i = 1, 2, \ldots, n_a) \qquad (2.5)$$

where the weights $w_j$ ($j = 1, \ldots, n_c$) indicate the importance of the criteria. They take subunitary values, are agreed in advance by all participants, and fulfill the condition:

$$\sum_{j=1}^{n_c} w_j = 1 \qquad (2.6)$$

The group polarization on an alternative $A_i$ is denoted by $gp_i$ and is calculated as:

$$gp_i = \sum_{j=1}^{n_c} w_j \, gp_{ij} \quad (i = 1, 2, \ldots, n_a) \qquad (2.7)$$

The computation of positions, intensities and polarizations can influence the multi-participant decision process. Thus, the alternative that possesses the highest global merit might not be chosen if the group polarization is over a certain limit, or/and if the group intensity is not high enough; both limits are agreed by all participants in advance, before evaluating the alternatives. At the same time, a participant who shows an uncommonly high contribution to the polarization can ask for more information and justification, with a view to possibly reevaluating his/her position. Alternatively, he/she can draw the attention of the rest of the group to certain hidden aspects, which can then be accepted or rejected.
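The IPV aggregation described above (group position, group intensity, group polarization, and global merit) can be sketched in a few lines of code. The sample positions, intensities and weights below are invented for illustration (three participants, two alternatives, two criteria), and the polarization contribution is taken as the intensity-weighted absolute deviation from the group position:

```python
# Sketch of the Intensity-Polarity-Voting (IPV) aggregation (Coman 1996).
# All sample data are illustrative assumptions, not values from the book.

v = [[[8, 6], [5, 9]],   # v[k][i][j]: position of participant k on
     [[7, 7], [6, 8]],   #             alternative i under criterion j
     [[9, 5], [4, 9]]]
c = [[2, 1],             # c[k][j]: intensity of participant k on criterion j
     [1, 1],
     [3, 2]]
w = [0.6, 0.4]           # criterion weights, agreed in advance, sum to 1

nd, na, nc = len(v), len(v[0]), len(w)

# Group intensity per criterion (independent of the alternative,
# because c[k][j] does not depend on i).
gc = [sum(c[k][j] for k in range(nd)) for j in range(nc)]

# Group position: intensity-weighted center of gravity of individual positions.
gv = [[sum(v[k][i][j] * c[k][j] for k in range(nd)) / gc[j]
       for j in range(nc)] for i in range(na)]

# Group polarization: intensity-weighted spread around the group position.
gp = [[sum(c[k][j] * abs(v[k][i][j] - gv[i][j]) for k in range(nd))
       for j in range(nc)] for i in range(na)]

# Global merit and per-alternative polarization (weighted over criteria).
J   = [sum(w[j] * gv[i][j] for j in range(nc)) for i in range(na)]
gpi = [sum(w[j] * gp[i][j] for j in range(nc)) for i in range(na)]

best = max(range(na), key=lambda i: J[i])
# The group may still reject `best` if gpi[best] exceeds an agreed limit.
```

With these sample data the first alternative obtains the highest global merit; comparing `gpi[best]` against a pre-agreed polarization threshold implements the consensus check discussed above.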

2.4.4 Integration and Evaluation

In some cases, a new DSS has to be integrated into the existing or planned I&CT infrastructure of the target organization. Several principles recommended by Vernadat (1996) are still valid for technical integration, such as:
• Adopting an open system architecture;
• Neutralizing the information, which can be achieved by using standardized data formats;
• Semantic unification, which means that a symbol has a unique meaning throughout the whole system.
There are, however, several new problems which can show up due to non-technical causes, for example:

• Wrong orientation of the solution, which does not facilitate solving the actual problems of the organization; this may be associated with informational opacity (the system provides more or less information than the necessary outputs);
• Functional opacity, which means that the user is not given the necessary information and incentives to understand how the system works;
• Frustration of the "hands-on" user, due to a long response time or an inadequate (insufficient or excessive) number of functions available to perform his/her task.
The evaluation of information systems has been a subject of interest for both system designers and users for a long time (Hamilton and Chervany 1981; Marakas 2003). There are several main principles to be observed in the process of designing, building and implementing an information system, such as:
• Evaluation is necessary in all phases of the design and implementation process. It is meant to support making a choice from the set of possible alternatives at a certain moment, such as: giving up the project, continuation, supplementing or reducing the project budget, allocating additional manpower resources and so on;
• Both the set of objectives and the degree of detail of the evaluation depend on various factors, such as: (a) the project scope, (b) the technical complexity, (c) the duration and cost of the project, (d) the person who requested the evaluation, (e) the overall state of the target enterprise;
• The presence of the designer in the evaluation team is necessary, especially in the case of a large project.
As stated above, the evaluation is meant to support a decision-making process.
Consequently, a set of evaluation criteria should be utilized, namely:
• The impact on the users' professional performance in accomplishing their tasks: possible additional stress caused by the DSS usage, the comfort of performing the task and so on;
• The users' quality of life and their general intellectual development;
• The impact on the overall evolution of the target enterprise;
• Implementation aspects and the expected further running costs.
A more detailed set of criteria, which was used in a specific setting, is given by Al-adaileh (2009). A presentation of the methods used in multiparticipant decision-making will be made in Sects. 3.2 and 3.3. There are several methods which can be utilized for evaluation, for example: (a) benefits/cost analysis, through the NPV ("net present value") of the investments, (b) value analysis, (c) "rating and scoring", (d) event logging and so on. Agouram (2009) proposes a useful methodology to assess the success of implementation projects, and Rhee and Rao (2008) present a complete methodology to evaluate DSS. For the specific case of IDSS, Phillips-Wren et al. (2009, 2011) propose an integrative, multi-criteria design and evaluation framework based on the AHP (Analytic Hierarchy Process) methodology.
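The first of the evaluation methods listed above, benefits/cost analysis through the NPV of the investment, can be illustrated with a short computation; the discount rate and cash flows below are hypothetical figures, not data from an actual project.

```python
# Hedged sketch of a benefits/cost evaluation via NPV. Year 0 holds the
# (negative) implementation outlay; later years hold net benefits
# (benefits minus running costs). All figures are illustrative.

def npv(rate, cash_flows):
    """Net present value of cash_flows[t] received at the end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

project = [-50_000, 18_000, 20_000, 22_000, 22_000]  # hypothetical DSS project
result = npv(0.08, project)
print(f"NPV at 8 %: {result:.2f}")  # a positive NPV argues for continuation
```

A negative NPV at the agreed discount rate would, conversely, support the "giving up the project" or "reducing the project budget" options listed among the possible evaluation outcomes.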

2.5 Notes and Comments

This chapter closes with the enumeration of several ideas that deserve to be retained by the reader, as follows:
• Decision-making can be viewed as a series of information processing activities, starting with setting the objectives pursued and ending with releasing the chosen course of action for implementation.
• There is a subtle difference between decision-making and decision-taking. While in the former case the result of the information processing is represented by the course of action that was chosen, in the latter the chosen alternative is firmly adopted and then released for execution by somebody who is empowered and accountable for his/her act.
• The level of structuredness of a certain decision problem is determined by a series of attributes, such as: the quality of the information available at the moment the decision is made, the importance of choosing a right solution, and the urgency and novelty of the situation.
• The decision unit, the entity that makes and takes a decision, may consist of one or several persons. In the latter case, some unit members may form a human decision support team meant to facilitate the decision-making activities of the person (or persons) empowered to take the decision and release it for implementation.
• There are several cognitive, time and cost-related limits and constraints that make difficult the task of decision-making and taking.
• The decision support system (DSS) can be regarded as a computerized version of the human decision support team and is meant to relax the limits and constraints which could be met in solving complex decision problems that count.
• There are two main DSS conceptual frameworks: (a) the model of Bonczek, Holsapple and Whinston (composed of communication, knowledge and problem processing subsystems); (b) the D/IDM (Dialog/Interface, Data, and Model) one. The latter can be viewed as a particular form of the former.
• Real-time DSS for control represent a particular DSS subclass and are meant to support decision activities aiming at preventing and, if necessary, solving crisis situations.
• There are several DSS classification schemes, according to criteria such as: the level of generality, the number of participants, the type of support provided, and the dominant technology.
• There are notable tendencies to combine several technologies with a view to creating more intelligent and integrated DSS. Such systems incorporate novel and diversified technologies to support solving the ever more numerous decision problems of the present.
• The DSS design and implementation process is influenced by several factors, such as: the participants, the method used, the source of software, the standards observed and so on.
• It is recommended to use a work-centric socio-technical approach, instead of a purely software-centric process.

• Designing and implementing a DSS may represent an opportunity to manage the change implementation within the target organization.
• The prototype-based method is recommended when the uncertainties are high and the involvement of the system's future users in the design, as co-participants, is strongly pursued.
• There are pros and cons of using the software as a service, associated with cloud computing, for DSS design and implementation.
• There are several groups of criteria for choosing the appropriate software pieces for a particular application-oriented specific DSS.
• For choosing the software solution, a simple and robust cooperative procedure for decision selection was presented and is recommended for utilization.
As Hosack et al. (2012) stated, "DSS research is alive and well". A rather recent literature survey (Filip et al. 2014) confirms the above statement. The DSS domain is positively evolving. This is mainly due to new conceptual results, to the enabling information and communication technologies, and to the ever-increasing number of successful practical applications. The chapter addressed several essential aspects of the DSS domain which are valid no matter whether the decision activities are carried out by an individual or by a multi-participant decision unit. It was, however, not possible to address all the elements which deserve more detailed presentations. The interested reader can find more information in the references below. The classical books (Alter 1980; Bonczek et al. 1981; Sprague and Carlson 1982; Holsapple and Whinston 1996; Power 2002a) and the newer ones (Turban et al. 2005; Burstein and Holsapple 2008; Power et al. 2015) can provide the reader with a comprehensive presentation of the DSS domain. The particular characteristic features that are specific to the systems meant to support multi-participant collaborative decision activities will be highlighted in the following chapters. Several modern concepts in computer-mediated collaboration will be described in Chap. 3, together with a presentation of the methods used in multiparticipant settings. Major information and communication technologies that enable computer-supported multi-participant decision-making will be addressed in Chap. 4.

References

Agouram H (2009) Defining information system success in Germany. International Journal of Information Management, 29: 129–137.
Akerkar R (2014) Introduction to Artificial Intelligence. PHI Learning Pvt. Ltd., Delhi.
Al-adaileh R M (2009) An evaluation of information systems success: a user perspective - the case of Jordan Telecom group. European Journal of Scientific Research, 37(2): 226–239.
Alter S (1977) A taxonomy of Decision Support Systems. Sloan Management Review, 19(1): 9–56.
Alter S (1980) Decision Support Systems: Current Practices and Continuing Challenges. Addison-Wesley, Reading, MA.

Alter S (2004) A work system view of DSS in its fourth decade. Decision Support Systems, 38(3): 319–327.
Alter S (2011) The work system method: systems thinking for business professionals. In: Proceedings of the 2012 Industrial and Systems Engineering Research Conference, Orlando, Florida.
Ariav G, Ginzberg M J (1985) DSS design: a systemic view of decision support. Communications of the ACM, 28(10): 1045–1052.
Avram M G (2014) Advantages and challenges of adopting cloud computing from an enterprise perspective. Procedia Technology, 12: 529–534.
Beemer B A, Gregg D G (2008) Advisory systems to support decision making. In: Handbook on Decision Support Systems 1. Springer, Berlin Heidelberg: 511–527.
Bhargava H K, Power D J, Sun D (2007) Progress in web-based decision support technologies. Decision Support Systems, 43(4): 1083–1095.
Bonczek R H, Holsapple C W, Whinston A B (1984) Developments in decision support systems. Advances in Computers, 23: 141–175.
Bonczek R H, Holsapple C W, Whinston A B (1981) Foundations of Decision Support Systems. Academic Press, New York.
Borne P, Popescu D, Filip F G, Stefanoiu D (2013) Optimization in Engineering Sciences: Exact Methods. J. Wiley & Sons, London.
Bosman A (1987) Relations between specific decision support systems. Decision Support Systems, 3(3): 213–224.
Bowman M, Tecuci G, Ceruti M G (2001) Application of Disciple to decision making in complex and constrained environments. In: Proceedings of the 2001 IEEE International Conference on Systems, Man, and Cybernetics, Vol. 5: 2932–2940.
Briggs R O, Kolfschoten G L, Vreede de G J, Albrecht C, Dean D L, Lukosch S (2009) A seven-layer model of collaboration: separation of concerns for designers of collaboration systems. In: Nunamaker Jr J F, Currie W L eds. Proceedings of the International Conference on Information Systems (ICIS'09), paper 26.
Briggs R O, Kolfschoten G L, Vreede de G-J, Albrecht C, Lukosch S, Dean D L (2015) A six-layer model of collaboration. In: Nunamaker J F, Romano Jr N C, Briggs R O eds. Collaborative Systems: Concept, Value, and Use. Routledge, Taylor & Francis Group, London: 211–227.
Buraga S C, Cioca M, Cioca L (2007) Grid-based decision support system used in disaster management. Studies in Informatics and Control - SIC, 16(3): 283–296.
Burstein F, Holsapple C eds (2008) Handbook on Decision Support Systems 2: Variations. Springer Science & Business Media, Berlin.
Carraro G, Chong F (2006) Software as a Service (SaaS): An Enterprise Perspective. MSDN Library (available at https://msdn.microsoft.com/en-us/library/aa905332.aspx, accessed on 18.03.2016).
Chaturvedi A R, Hutchinson G K, Nazareth D L (1993) Supporting real-time decision-making through machine learning. Decision Support Systems, 10(2): 213–233.
Cohen P R, Feigenbaum E A eds (2014) The Handbook of Artificial Intelligence. HeurisTech Press & William Heinemann Inc.
Coman A (1996) Competence, power and conflict in group decision making. Human Systems Management, 15(4): 245–255.
Darren F, McGregor C, El-Masri S (2005) A survey of agent-based intelligent decision support systems to support clinical management and research. In: Proceedings of the 2nd International Workshop on Multi-Agent Systems for Medicine, Computational Biology, and Bioinformatics: 16–34.
De Michelis G (1996) Co-decision within cooperative processes: analysis, design and implementation issues. In: Humphrey P, Bannon L, McCosh A, Migliarese P, Pomerol J Ch eds. Implementing Systems for Supporting Management Decisions: Concepts, Methods and Experiences. Chapman & Hall, London: 124–138.
DeSanctis G, Gallupe R B (1987) A foundation for the study of group decision support systems. Management Science, 33(5): 589–609.

Dhar V, Stein R (1997) Intelligent Decision Support Methods: The Science of Knowledge Work. Prentice Hall, Upper Saddle River, New Jersey.
Dutta A (1996) Integrated AI and optimization for decision support: a survey. Decision Support Systems, 18(3): 213–226.
Dzitac I, Bărbat B E (2009) Artificial intelligence + distributed systems = agents. International Journal of Computers Communications & Control, 4(1): 17–26.
Eom S B (2002) Decision Support Systems Research (1970–1999) as Cumulative Tradition and Reference Disciplines. Edwin Mellen Press, Lewiston, NY.
Eom S B (2003) Decision Support Systems Research and Reference Disciplines (1970–2001): A Research Guide to the Literature and an Unobtrusive Bibliography with Citation Frequency. Edwin Mellen Press, Lewiston, NY.
Eom S B (2005) Evolution of DSS: from single user DSS to inter-organizational DSS. In: Eom S B ed. Inter-organizational Information Systems in the Internet Age. Idea Group Publishing, London: 231–247.
Feigenbaum E A (1977) The Art of Artificial Intelligence. 1. Themes and Case Studies of Knowledge Engineering (No. STAN-CS-77-621). Stanford University, Dept. of Computer Science, Stanford, CA.
Filip F G (1995) Towards more humanized real-time decision support systems. In: Camarinha-Matos L, Afsarmanesh H eds. Balanced Automation Systems: Architectures and Design Methods. Chapman & Hall, London: 230–240.
Filip F G (2005) Decizie asistata de calculator; decizii, decidenti, metode de baza si instrumente informatice asociate ("Computer-Aided Decision-Making: Decisions, Decision-Makers, Basic Methods and Software Tools"), 2nd Edition. Editura Tehnica, Bucuresti (in Romanian).
Filip F G (2007) Sisteme suport pentru decizii ("Decision Support Systems"), 2nd Edition. Editura Tehnica, Bucuresti (in Romanian).
Filip F G (2008) Decision support and control for large-scale complex systems. Annual Reviews in Control, 32(1): 61–70.
Filip F G (2011) Designing and building modern information systems: a series of decisions to be made. Computer Science Journal of Moldova, 19(2): 119–129.
Filip F G (2012) A decision-making perspective for designing and building information systems. International Journal of Computers Communications & Control, 7(2): 264–272.
Filip F G, Barbat B (1999) Informatica industriala. Noi paradigme si aplicatii ("Industrial Informatics: New Paradigms and Applications"). Editura Tehnica (Technical Publishers), Bucharest (in Romanian).
Filip F G, Donciulescu D A, Filip Cr I (2002) Towards intelligent real time decision support systems. Studies in Informatics and Control - SIC, 11(4): 303–312.
Filip F G, Donciulescu D A, Neagu G (1983) Job scheduling optimization in real-time production control. Computers in Industry, 4(4): 395–403.
Filip F G, Leiviskä K (2009) Large-scale complex systems. In: Nof S Y ed. Springer Handbook of Automation. Springer, Berlin Heidelberg: 619–638.
Filip F G, Roberts P D, Zhang J (1992) Combined numeric - knowledge based hierarchical control; part II: process scheduling and coordination. Studies in Informatics and Control - SIC, 1(4): 267–283.
Filip F G, Suduc A M, Bîzoi M (2014) DSS in numbers. Technological and Economic Development of Economy, 20(1): 154–164.
Ford F N (1985) Decision support systems and expert systems: a comparison. Information & Management, 8(1): 21–26.
Franklin S, Graesser A (1996) Is it an agent, or just a program? A taxonomy for autonomous agents. In: Intelligent Agents III: Agent Theories, Architectures, and Languages. Springer, Berlin Heidelberg: 21–35.
Gallupe R B (1986) Experimental research into group decision support systems: practical issues and problems. In: Proceedings of the 19th Hawaii International Conference on Systems Science, Honolulu, Hawaii, January: 513–523.

Galvan S, Mora M, O'Connor R V, Acosta F, Alvarez F (2015) A compliance analysis of agile methodologies with the ISO/IEC 29110 project management process. Procedia Computer Science, 64: 188–195.
Gowri S, Vigneshwari S, Sathiyavathi R, Lakshmi T K (2016) A framework for group decision support system using cloud database for broadcasting earthquake occurrences. In: Proceedings of the International Congress on Information and Communication Technology. Springer: 611–615.
George J F (1991) The conceptualization and development of organizational decision support systems. Journal of Management Information Systems, 8(3): 109–125.
Gray P (1987) Group decision support systems. Decision Support Systems, 3(3): 233–242.
Gray P, Nunamaker J F (1993) Group decision support systems. In: Decision Support Systems: Putting Theory into Practice, 3rd Edition. Prentice Hall International Inc., London: 309–326.
Gray P, Johansen B, Nunamaker J, Rodman J, Wagner G R (2011) GDSS past, present, and future. In: Schuff D, Paradice D, Burstein F, Power D J, Sharda R eds. Decision Support: An Examination of the DSS Discipline. Springer, New York: 1–24.
Hackathorn R D, Keen P G W (1981) Organizational strategies for personal computing in decision support systems. MIS Quarterly, 5(3): 21–27.
Holsapple C W (1991) Decision support in multiparticipant decision makers. Journal of Computer Information Systems, 31(4): 37–45.
Hamilton S, Chervany N L (1981) Evaluating information system effectiveness - Part I: comparing evaluation approaches. MIS Quarterly, 5(3): 55–69.
Hine J, Laliberte B (2011) Enabling IT-as-a-Service. White Paper, ESG (available at: http://docs.media.bitpipe.com/io_10x/io_102870/item_483065/enabling%20it-as-a-service.pdf, accessed on 18.03.2016).
Holsapple C W (2008) DSS architecture and types. In: Burstein F, Holsapple C eds. Handbook on Decision Support Systems 1. Springer, Berlin Heidelberg: 163–189.
Holsapple C W, Whinston A B (1996) Decision Support Systems: A Knowledge-based Approach. West Publishing Co., Minneapolis/St. Paul, Minnesota.
Hosack B, Hall D, Paradice D, Courtney J F (2012) A look toward the future: Decision Support Systems research is alive and well. Journal of the Association for Information Systems, 13(5): 315–340.
Hribar Rajterič I (2010) Overview of business intelligence maturity models. Management: Journal of Contemporary Management Issues, 15(1): 47–67.
ISO 9241 (2016) Ergonomics of human-system interaction (available at: http://www.iso.org/iso/search.htm?qt=9241&sort=rel&type=simple&published=on, accessed on 18.03.2016).
Kaklauskas A (2015) Biometric and Intelligent Decision Making Support. Springer Verlag, Berlin.
Kim E, Eom S B (2004) Decision Support Systems Applications Research: A Bibliography (1995–2001). Working Paper, Southern Missouri State University.
Kivijärvi H (1997) A substance-theory-oriented approach to the implementation of organizational DSS. Decision Support Systems, 20(3): 215–241.
Klingour M, Eden C (2010) Introduction to the handbook of group decision and negotiation. In: Klingour M, Eden C eds. Handbook of Group Decision and Negotiation. Springer Science + Business Media, Dordrecht: 1–7.
Kolb D A, Frohman A L (1970) An organization development approach to consulting. Sloan Management Review, 12(1): 51–65.
Kou G, Lu Y, Yi P, Shi Y (2012) Evaluation of classification algorithms using MCDM and rank correlation. International Journal of Information Technology & Decision Making, 11(1): 197–225.
Le Blanc L A, Jelassi M T (1989) DSS software selection: a multiple criteria decision methodology. Information & Management, 17(1): 49–65.

Licklider J C R (1960) Man–computer symbiosis. IRE Transactions on Human Factors in Electronics, Vol. HFE-1: 4–10.
Liu S, Duffy A H, Whitfield R I, Boyle I M (2010) Integration of decision support systems to improve decision support performance. Knowledge and Information Systems, 22(3): 261–286.
Liu S, Smith M H, Tuck S, Pan J, Alkuraiji A, Jayawickrama U (2014) Where can knowledge-based decision support systems go in contemporary business management - a new architecture for the future. Decision Support Systems (KB-DSS), 1(5): 499–504.
Marakas G M (2003) Decision Support Systems in the 21st Century. Prentice Hall, Upper Saddle River, New Jersey.
McCosh A M (2002) E-mail to D. Power on 03 Oct. 2002. In: (Power D J 2002b).
Monostori L, Barschdorff D (1992) Artificial neural networks in intelligent manufacturing. Robotics and Computer-Integrated Manufacturing, 9(6): 421–437.
Moriso M, Tsoukias A (1997) JusWare: a methodology for evaluation and selection of software products. IEE Proceedings - Software Engineering, 144(2): 162–174.
Naylor T H (1982) Decision support systems or whatever happened to M.I.S.? Interfaces, 12(4): 92–94.
Nicoară E S, Filip F G, Paraschiv N (2011) Simulation-based optimization using genetic algorithms for multi-objective flexible JSSP. Studies in Informatics and Control, 20(4): 333–344.
Nunamaker J F, Dennis A R, Valacich J S, Vogel D, George J F (1991) Electronic meeting systems. Communications of the ACM, 34(7): 40–61.
Ness D N (1975) Decision support systems: theory and design. Wharton Office of Naval Research Conference on DSS, Philadelphia, November: 4–7.
Peng Y, Kou G, Wang G, Shi Y (2011) FAMCDM: a fusion approach of MCDM methods to rank multiclass classification algorithms. Omega, 39(6): 677–689.
Phillips-Wren G, Mora M, Forgionne G (2011) Assessing today: determining the decision value of decision support systems. In: Decision Support. Springer, New York: 203–220.
Phillips-Wren G, Mora M, Forgionne G A, Gupta J N (2009) An integrative evaluation framework for intelligent decision support systems. European Journal of Operational Research, 195(3): 642–652.
Pomerol J C (1997) Artificial intelligence and human decision making. European Journal of Operational Research, 99(1): 3–25.
Power D J (2001) Supporting decision-makers: an expanded framework. In: e-Proceedings Informing Science Conference, Challenges to Informing Clients: A Transdisciplinary Approach, Krakow, Poland: p. 431–436.
Power D J (2002a) Decision Support Systems: Concepts and Resources for Managers. Quorum Press, Westport, Connecticut.
Power D J (2002b) A brief history of decision support systems. DSSResources.COM (version 4.0, 2007, available at: http://DSSResources.COM/history/dsshistory.html, consulted on: 12.03.2016).
Power D J (2002c) When is “real-time” decision support desirable and needed? DSS News, 3(25), 08 December.
Power D J (2005) Decision Support Systems: Frequently Asked Questions. iUniverse Inc., New York.
Power D J (2008a) Decision support systems: a historical overview. In: Handbook on Decision Support Systems. Springer, Berlin Heidelberg: p. 121–140.
Power D J (2008b) Understanding data-driven decision support systems. Information Systems Management, 25(2): 149–154.
Power D J (2011) Challenges of real-time decision support. In: Supporting Real Time Decision-Making, Annals of Information Systems 13, Burstein F et al eds. Springer Science + Business Media, LLC: p. 3–11.
Power D J, Phillips-Wren G (2011) Impact of social media and Web 2.0 on decision-making. Journal of Decision Systems, 20(3): 249–261.

Power D J (2014) Using ‘Big Data’ for analytics and decision support. Journal of Decision Systems, 23(2): 222–228.
Power D J, Kaparthi S (2002) Building web-based decision support systems. Studies in Informatics and Control-SIC, 11(4): 291–302.
Power D J, Sharda R (2007) Model-driven decision support systems: concepts and research directions. Decision Support Systems, 43(3): 1044–1061.
Power D J, Sharda R, Burstein F (2015) Decision Support Systems. John Wiley & Sons, Ltd.
PP (2016) Project Perfect: Project Management Software (http://www.projectperfect.com.au/, accessed on 18.03.2016).
Rhee C, Rao H R (2008) Evaluation of decision support systems. In: Burstein F, Holsapple C eds. Handbook on Decision Support Systems 2: Variations. Springer, Berlin Heidelberg: p. 313–327.
Ricci F, Rokach L, Shapira B, Kantor P (2011) Recommender Systems Handbook. Springer Science + Business Media, LLC.
Scott Morton M S (1967) Computer-Driven Visual Display Devices: Their Impact on the Management Decision-Making Process. Doctoral Dissertation, Harvard Business School.
Sharma R S (2006) Change Management. Tata McGraw-Hill, New Delhi.
Shelly G B, Cashman T J, Rosenblatt H J (2010) Systems Analysis and Design, 8th Ed. Thomson Course Technology, Boston, Mass.
Sheridan T B, Verplank W (1978) Human and Computer Control of Undersea Teleoperators. Man-Machine Systems Laboratory, Dept. of Mechanical Engineering, MIT, Cambridge, MA.
Shim J P, Warkentin M, Courtney J F, Power D J, Sharda R, Carlsson C (2002) Past, present, and future of decision support technology. Decision Support Systems, 33(2): 111–126.
Simon H (1960) The New Science of Management Decision. Harper & Row, New York.
Simon H (1977) The New Science of Management Decision. Revised Edition. Prentice Hall, Englewood Cliffs, New Jersey.
Simon H (1987) Two heads are better than one: the collaboration between AI and OR. Interfaces, 17(4): 8–15.
Soelberg P O (1967) Unprogrammed Decision Making. Industrial Management Review, 11 pg.
Sprague Jr R H (1980) A framework for the development of decision support systems. MIS Quarterly, 4(4): 1–26.
Sprague Jr R H (1987) DSS in context. Decision Support Systems, 3: 197–212; republished in: (Sprague, Watson eds, 1993): p. 29–40.
Sprague Jr R H, Carlson E D (1982) Building Effective Decision Support Systems. Prentice Hall, Englewood Cliffs, N.J.
Software Resources (2016) 12 Deadly Mistakes of Software Selection (available at: http://www.softresources.com/resources/articles/12-deadly-mistakes-of-software-selection, accessed on 18.03.2016).
Software Resources (2016) Smart Software Selection (available at: http://www.softresources.com, accessed on 18.03.2016).
Stefanoiu D, Borne P, Popescu D, Filip F G, El Kamel A (2014) Optimization in Engineering Sciences: Metaheuristic, Approximate Methods and Decision Support. John Wiley & Sons, London.
TEC (2016) Technology Evaluation Center (http://www.technologyevaluation.com/, accessed on 18.03.2016).
Tecuci G (1998) Building Intelligent Agents: An Apprenticeship Multistrategy Learning Theory, Methodology, Tool and Case Studies. Academic Press, San Diego, CA.
Tecuci G, Marcu D, Boicu M, Vu L (2007) Mixed-initiative assumption-based reasoning for complex decision-making. Studies in Informatics and Control-SIC, 16(4): 459–468.
Tecuci G, Marcu D, Boicu M, Schum D A (2016) Knowledge Engineering: Building Cognitive Assistants for Evidence-based Reasoning. Cambridge University Press.
Trumba (2007) White Paper: Five Benefits of Software as a Service (available at: http://www.trumba.com/connect/knowledgecenter/software_as_a_service.aspx, accessed on 09.02.2012).
Turban E, Aronson J E, Liang T P (2005) Decision Support Systems. 7th ed., Pearson–Prentice Hall, Upper Saddle River, New Jersey.

Turban E, Watkins P R (1986) Integrating expert systems and decision support systems. MIS Quarterly, 10(2): 121–136.
Turoff M, White C, Plotnick L (2011) Dynamic emergency response management for large scale decision making in extreme hazardous events. In: Supporting Real Time Decision-Making, Annals of Information Systems 13, Burstein F et al eds. Springer Science + Business Media, LLC: p. 181–202.
Van de Walle B, Turoff M (2008) Decision support for emergency situations. Information Systems and E-Business Management, 6(3): 295–316.
Vazsonyi A (1982) Decision support systems, computer literacy and electronic models. Interfaces, 12(1): 74–78.
Vernadat A (1996) Enterprise Modeling and Integration: Principles and Applications. Chapman & Hall, London.
Vlahavas I, Stamelos I, Refanidis I, Tsoukias A (1999) ESSE: an expert system for software evaluation. Knowledge-Based Systems, 12(4): 183–197.
Wagner G R (1981) Decision support systems: The real substance. Interfaces, 11(2): 77–86.
Watson H J, Rainer Jr R K, Koh C E (1991) Executive information systems: a framework for development and a survey of current practices. MIS Quarterly, 15(1): 13–30.
Young L F (1983) Computer support for creative decision-making: right-brained DSS. Processes and Tools for Decision Support: p. 2001–2004.
Zamfirescu C B (2003) An agent-oriented approach for supporting self facilitation for group decisions. Studies in Informatics and Control-SIC, 12(2): 137–148.
Zaraté P (2013) Tools for Collaborative Decision-Making. John Wiley & Sons.
Zavadskas E K, Turskis Z, Kildienė S (2014) State of art surveys of overviews on MCDM/MADM methods. Technological and Economic Development of Economy, 20(1): 165–179.

Chapter 3 Collaborative Activities and Methods

Having described the context for collaborative activities (in Chap. 1) and reviewed the basic aspects of computer-supported decision-making activities (in Chap. 2), we present in this chapter several specific methods used in collaborative decision making. The methods and techniques presented in the chapter are independent of the information technologies upon which they are instantiated. In the chapter, we address the concepts of computer-supported collaborative activities and provide a methodological background regarding the aggregation of collective preferences and choices. The chapter is organized in four sections, as follows. In the first section, we define collaborative activities carried out by participants organized in groups and review the evolution of the computer and communication support provided to groups. The second section reviews the most frequently used voting rules defined in social choice theory to generate a collective choice of the group from the individual preferences. In modern applications, these aggregation rules are not used exclusively for the choice phase of the classical model of decision-making, but are equally applicable to the design phase, when the collective view over the individual judgments and arguments may require a certain level of rationality. The third section surveys the latest achievements from social choice theory that are employed in knowledge-driven DSS. The last section addresses the engineering issues of deploying computer-supported collaborative activities in real working environments.

3.1 Computer Supported Collaboration

3.1.1 Collaboration, e-Collaboration and Collaborative Groups

The formal definition of collaboration recommended in (Camarinha-Matos et al. 2009; Nof et al. 2015, p. 3; Zhong et al. 2015) was adopted and presented in


Sect. 1.1.2. Collaboration was defined as a special type of interaction of several entities, such as organizations, humans, and machines, that exchange information for mutual benefit, harmonize their major goals and objectives, and share resources, action plans, and responsibilities to attain the common goals.

3.1.1.1 The Concept

Since the border between collaboration and cooperation might look very fuzzy, the antonyms of the two concepts can be used to differentiate them, one from the other: working independently is the opposite of collaboration, and competition is the opposite of cooperation. As pointed out in (Zhong et al. 2015; Nof et al. 2015), two or more entities collaborate because none of them, working individually, can deliver the expected output, such as a product, a service or a decision solution. This type of common work is called mandatory collaboration. Two or more entities might also start collaborating because they aim at improving the quality of their deliverables or/and at achieving higher value for all of them. This is called optional collaboration (Zhong et al. 2015). Nof et al. (2015) discuss the interactions and collaboration links that can be established among various types of entities, such as humans, computers, machines, mobile communication devices, sensors, actuators, and the Internet. In this book, the emphasis is put on collaborating human agents (called participants) organized in groups (or teams) who interact via electronic devices in order to make management and/or control decisions. A collaborative group is made up of several members who are assigned, or decide by themselves, to jointly attain a set of common goals by carrying out a set of activities and using a number of procedures and techniques. At the same time, the group members who participate in the joint effort have their own individual goals, objectives, criteria and methods. Several attributes that characterize a collaborating group are reviewed by Chen and Briggs (2015) and Briggs et al.
(2015):

• Congruence of goals and methods, defined as the degree to which the individual goals and methods of the group members are compatible with and served by the adopted/assigned goals, activities and procedures of the group as a whole;
• Group effectiveness, which measures or estimates the degree to which the group goals are attained;
• Group efficiency, defined as the degree to which the group saves its members' resources in the attempt to attain the group goals;
• Group cohesion, which indicates the degree to which group members preserve the willingness to further collaborate.

In Sect. 2.1.1, we presented the process model of decision making that was proposed by Herbert Simon. In that presentation, it was implicitly assumed that the decision-maker was an individual. In case several persons communicate and collaborate with a view to making a decision, an initial preparatory phase named group forming should be added to the model (Turban et al. 2011). The main activities to be carried out during this specific phase are: identifying the group members, building trust, adopting an appropriate collaboration style, and facilitating communication.

Communication is an essential ingredient of collaboration. It consists of the exchange of information among individuals organized in groups. From a communication perspective, groups are characterized by several attributes, such as:

• place of work: same place or different places;
• moment of interaction: synchronous (same time) or asynchronous (different times);
• type of interaction: direct, or indirect and mediated;
• size: small teams, or large and very large (crowd) groups.

There are cases, especially in large groups, when the collaborating participants, though transmitting pieces of information, might not be aware of one another. For example, the people who express their evaluation of a product or a service in the collaborative filtering of recommender systems (Ricci et al.
2011) simply pool their opinions. The participants in several crowdsourcing processes, such as crowd creation and crowd voting (Chiu et al. 2014), do not commonly establish direct communication links among themselves. It is worth remarking that human communication does not necessarily imply collaboration. Turoff (1991) presents variation ranges of the content of human communication over five dimensions:

• Cooperation: from friendly and cooperative to competitive and hostile;
• Intensity: from intense and engrossed to superficial and uninvolved;
• Dominance: from democratic and equal to autocratic and unequal;
• Formality: from personal and informal to impersonal and formal;
• Orientation: from productive and task-oriented to unproductive and without objectives.

Since early times, people have used various natural means or/and artifacts to make communication and collaboration possible and effective. Kock (2009) and Kock et al. (2001) propose an operational definition of electronic collaboration (e-collaboration) as the “collaborative work using electronic technologies among different individuals to accomplish a common task”. This definition goes beyond the traditional view of computer-supported collaboration and may include other collaboration means, such as the telephone, or teleconference tools such as TV cameras, screens, and communication equipment (Kock 2005, 2009; Kock and Nosek 2005).
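The opinion pooling performed by collaborative filtering can be illustrated with a minimal sketch: a user's unknown rating of an item is predicted as a similarity-weighted average of the ratings given by other users. The users, items, and scores below are invented for illustration only; real recommender systems (Ricci et al. 2011) use far richer models.

```python
from math import sqrt

# Hypothetical ratings (user -> {item: score on a 1-5 scale}); illustrative data only.
ratings = {
    "ana":   {"camera": 5, "phone": 3, "tablet": 4},
    "bob":   {"camera": 4, "phone": 2, "tablet": 4},
    "carol": {"camera": 1, "phone": 5},
}

def cosine_sim(u, v):
    """Cosine similarity over the items two users rated in common."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = sqrt(sum(u[i] ** 2 for i in common))
    nv = sqrt(sum(v[i] ** 2 for i in common))
    return dot / (nu * nv)

def predict(user, item):
    """Similarity-weighted average of the other users' ratings for `item`."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        s = cosine_sim(ratings[user], r)
        num += s * r[item]
        den += s
    return num / den if den else None

print(predict("carol", "tablet"))  # both neighbors rated the tablet 4 -> 4.0
```

Carol never rated the tablet, yet her predicted score (4.0) is recovered purely by pooling the opinions of users with similar tastes, without any direct communication among them.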

3.1.1.2 Subclasses

The following subclasses can be identified as particular forms of the general superclass of e-collaboration (Shim et al. 2002; Turban et al. 2011; Nunamaker et al. 2015a):

• CSCW (Computer Supported Cooperative Work) and the associated groupware technologies, which are focused on cooperative work ranging from group editing to workflow and the design of software. Koch et al. (2015) find the origins of CSCW in the workshop organized in 1984 by Irene Greif and Paul Cashman, where people belonging to different disciplines met with a view to better understanding how information technology could be used to improve and enhance group outcomes. According to Nunamaker et al. (2015a), CSCW processes “are not designed in advance and emerge in a natural way over the collaboration activities to the satisfaction and creativity of the participants organized in small groups”.
• GSS (Group Support Systems), which are related to the usage of electronic meeting systems (EMS) to support the set of decision-making activities or parts of it (Dennis et al. 1998; Suduc et al. 2009). Nunamaker et al. (2015a) and Koch et al. (2015) state that GSS, in contrast with CSCW, are meant for more numerous and better organized groups (composed of ten to hundreds of people), with a view to improving the effectiveness of the participants' work.

The main interest of this book is focused on those GSS that are meant to support decision-making activities carried out by groups of participants. Those systems are called GDSS (Group Decision Support Systems) and form a specific subclass of GSS.

3.1.2 Brief History of e-Collaboration

The e-collaboration solutions meant to support group work have gone through several stages of evolution and usage. The telegraph, invented by S.F.B. Morse in the mid-1800s, and the telephone, patented by A. Graham Bell in the 1870s, might be viewed as early enablers of e-collaboration. However, as pointed out by Kock (2005, 2009), Kock and Nosek (2005), and Kock et al. (2001), those electronic devices were not enough to announce the start of the e-collaboration era. The same authors state that neither were the first commercial mainframe computers, which emerged after the Second World War (WW II), appropriate tools for e-collaboration, because they were more expensive than the labor they could replace. They were used only to solve specialized problems in the highly centralized enterprises of the time, where the collaboration of people organized in groups was not frequently encountered. The computer-mediated communication (CMC) technologies meant to support remote work, the early decision rooms conceived to support face-to-face and synchronous settings, and the associated software products had a decisive contribution to entering the e-collaboration era.

3.1.2.1 Milestones in Computer Mediated Communication

There are opinions (Kock and Nosek 2005) that electronic mail was the first essential enabling technology for e-collaboration. Nunamaker et al. (2015a) mention MAILBOX, a proto email system, and SNDMSG, an early messaging system, both developed at the Massachusetts Institute of Technology (MIT) in the mid-1960s. Tomlinson (2009), the father of electronic mail, describes the first technical solutions based on SNDMSG. In Kock (2009), Sajithra and Patil (2013), and Nunamaker et al. (2015a), several other major initiatives and technology issues that stimulated the advance of e-collaboration are viewed as milestones in CMC development:

• ARPANET (the Advanced Research Project Agency Network), which implemented in 1969 the first packet-switching connection between two computers, one situated at UCLA (University of California Los Angeles) and the other placed at SRI (Stanford Research Institute);
• The first integrated digital platform, named oN-Line System (NLS), demonstrated in operation by Douglas Carl Engelbart at the Fall Joint Computer Conference in San Francisco in December 1968. The platform, whose presentation was later called “the Mother of All Demos”, could support up to 15 workstations and provided various functions, such as computer-based chat, videoconferencing, screen sharing, document retrieval, practical employment of hyperlinks and so on;
• The SMTP (Simple Mail Transfer Protocol), defined in the early 1980s, which enabled sending electronic messages to specific users connected to the computers of a network;
• The LAN (Local Area Network), an early prototype of which was the Cambridge Ring of computers, developed in 1974 (Hopper and Needham 1988). The first commercial LAN was based on the Ethernet protocol, developed at Xerox PARC in 1973–1975.

3.1.2.2 Decision Rooms

Peter Gray stated in (Gray et al. 2011) that the roots of the present GSS (Group Support Systems) can be traced back to the early decision rooms, inspired by Churchill's War Room of the Second World War and developed in US universities starting from the mid-1960s. In this respect, Engelbart's Decision Room installed at SRI is considered one of the first implementations of a GSS. It enabled one to retrieve pieces of information, such as text and pictures, from the central computer and display them on the screens of other participants. The usage of the newly invented mouse, meant to control the movements of the cursor on the screen, is associated with Engelbart's Decision Room.

Other decision rooms were subsequently created in academic environments (The University of Southern California, The Southern Methodist University in Dallas, The Claremont Graduate University, The University of Arizona in Tucson, The University of Minnesota in Minneapolis, The London School of Economics and so on) and in industry (IBM, The Nippon Electric in Tokyo, Metapraxis in London) (Gray et al. 2011). The early decision rooms implemented the same-place same-time model of GSS.

3.1.2.3 GSS Software and Systems

Policy Delphi, first introduced in 1969; EMISARI (Emergency Management Information Systems And Reference Index), created by Murray Turoff in 1971 and used for about 20 years to support decision making during national emergencies; and EQUAL, meant to handle external information requests within IBM, are among the first GSS-type software systems described by Turoff (1991). G. R. Wagner mentioned other software systems which set the stage for the GSS of the present, such as Mindsight, meant to be used in the Planning Laboratory, Vision, and so on (Gray et al. 2011). The collaborative electronic brainstorming system created by J. F. Nunamaker and Benn Konsynski in 1977, and SAMM (Software Aided Meeting Management) and GroupSystems, both developed in the late 1980s, demonstrated the feasibility and effectiveness of GSS (Nunamaker et al. 2015a).

In more recent times, Turban et al. (2011) describe what they call collaboration 2.0, which is characterized by the deployment of Web 2.0 technologies, such as social software tools and services (social networks and media, opinion polls, wikis, blogs and so on), to make collaboration ever more effective and efficient. The above authors cite papers of the domain literature which indicate that group decision-making supported by the earlier tools, called collaboration 1.0, showed a series of drawbacks, such as: (a) work fragmentation, (b) slow and difficult management of operations, (c) high costs, and (d) limited application to groups composed of people who know each other. The usage of software tools based on Web 2.0 and later versions in computer-supported decision making processes was viewed as a possible way to overcome the problems cited above. Table 3.1, adapted from (Turban et al. 2011), presents a comparison of [computer-supported] collaboration 1.0 and collaboration 2.0. The subject will be addressed later, in Sect. 4.2.
Besides web technology, other recent concepts and technologies already in use, such as cloud computing, service-oriented architectures, the Internet of things, and business intelligence and analytics (see Chap. 4), have contributed to enabling a new manner of supporting collaboration. An analysis of modern technologies for e-collaboration is made by Mittleman et al. (2015).

Table 3.1 Collaboration 2.0 versus collaboration 1.0 (adapted from Turban et al. 2011)

Criterion                   | Collaboration 1.0                                 | Collaboration 2.0
Platform                    | Proprietary                                       | Open source
Usability (ease of use)     | Reduced                                           | High
Interactivity               | Low                                               | High
Structuredness              | Structured                                        | Unstructured
Application of add-ons      | Created by the enterprise                         | Easily created by the user
Number of participants      | Small to medium groups                            | Possibly unlimited, crowds
User type                   | Trained or helped by a facilitator of operations  | Hands-on, “homo digitalis”
Contacting external experts | By e-mail                                         | Social networks, forums, crowdsourcing
Necessary infrastructure    | LAN, intranets, VAN                               | Internet, social networks, mobile cloud computing
Costs                       | High                                              | Reduced

3.1.3 More About Group Support Systems

In Sect. 3.1.1, we presented two types of computer-based e-collaboration solutions, namely computer-supported cooperative work (CSCW) and group support systems (GSS), initially called electronic meeting systems (EMS). Nunamaker et al. (2015a) remarked that, in the early 1990s, the computer-based collaboration movement, which was initially interested in facilitating group decision-making activities, split into the above two branches, CSCW and GSS. Kock and Nosek (2005) noticed that CSCW and GSS or GDSS (Group Decision Support Systems, a specific subclass of GSS) can be differentiated by several distinct characteristics, such as: (a) traditions, (b) lines of research, and (c) communities of researchers and authors of reported results. Dennis et al. (1998) noticed that while GDSS are “more focused on task support”, CSCW tools “provide general communication support”. In the sequel, the presentation will elaborate on the characteristics of GSS.

Steiner (1972), Nunamaker et al. (1991), Gray et al. (2011), and Kolfschoten and Nunamaker (2015b) identify several problems that might show up in unsupported group work and can be solved by using computer-based e-collaboration tools:

• The declining effectiveness of unsupported group work when the group size exceeds a small number, say five persons;
• Groupthink, caused by a strong authoritarian leadership, time or/and external pressure, isolation, and high homogeneity of the group, when the participants have similar intentions and interests;

• Cognitive overload caused by multiple interactions and tasks;
• The fear of possible negative consequences for group members, sometimes associated with their indecisiveness and lack of firm commitment;
• Premises for misunderstanding in case the participants speak different languages or possess different cultural backgrounds;
• Fuzziness in expressing the goals pursued in the decision process;
• High costs of organizing the meetings, associated with recent threats of terrorism;
• The consequences of group decisions once adopted, which cannot be ignored or easily cancelled or reversed, even though their impact might be viewed as unfavorable by individual participants.

The major desired characteristic features of a group support system are presented by Kolfschoten and Nunamaker (2015b) and Nunamaker et al. (2015b):

• Parallelism, meant to avoid the waiting time of participants who want to speak in an unsupported meeting, by enabling all users to add their ideas and points of view simultaneously;
• Anonymity, which makes it possible for an idea to be accepted based on its value only, regardless of the position or reputation of the person who proposed it;
• Memory of the group, based on the long-term and accurate recording of the ideas expressed by individual participants and the conclusions reached by the group;
• Improved precision of the contributions, which are typed in rather than presented orally;
• Unambiguous display, on computer screens, of the ideas and points of view;
• Any-time and/or any-place operation, which enables the participation of all relevant persons regardless of their location.

A comparison of face-to-face unsupported meetings with computer-aided ones is presented in Table 3.2.

Table 3.2 Comparing face-to-face meetings with computer supported group work (inspired from Gray et al. 2011; Roy 2012; Shim et al. 2002)

Attribute           | Face-to-face unassisted meeting                | Computer-aided meeting
Sense of community  | High                                           | Low
Communication style | Verbal, paraverbal, nonverbal; mostly informal | Written signs; mostly formal
Commitment          | Immediate, on place                            | Cautious
Place               | Same place (meeting room)                      | Same place (decision room) or distributed (several rooms or on-the-go, enabled by mobile devices)
Time                | Synchronous                                    | Synchronous or/and asynchronous
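The parallelism, anonymity, and group-memory features discussed above can be sketched as a minimal, purely illustrative data structure (not an actual GSS implementation): ideas are accepted at any time, stored without the author's identity, and kept as a permanent group record.

```python
import itertools
from datetime import datetime, timezone

class IdeaPool:
    """Minimal sketch of a GSS idea pool: parallel submission,
    anonymity (no author stored), and persistent group memory."""

    def __init__(self):
        self._ideas = []                # group memory: every contribution is kept
        self._ids = itertools.count(1)  # anonymous sequential ids, not user names

    def submit(self, text):
        """Any participant may add an idea at any time (parallelism);
        only the text and a timestamp are recorded (anonymity)."""
        idea = {"id": next(self._ids),
                "text": text,
                "at": datetime.now(timezone.utc)}
        self._ideas.append(idea)
        return idea["id"]

    def memory(self):
        """Unambiguous, accurate record of all ideas expressed so far."""
        return [(i["id"], i["text"]) for i in self._ideas]

pool = IdeaPool()
pool.submit("Use open-source groupware")
pool.submit("Pilot with a small team first")
print(pool.memory())
# -> [(1, 'Use open-source groupware'), (2, 'Pilot with a small team first')]
```

Because only the idea text is displayed, a contribution can be judged on its merits alone, while the accumulated list serves as the group's memory.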

The participants in virtual (distributed) computer-supported group work should possess several skills concerning relationship building, communication, collaboration, and technology (Roy 2012). In order to be effectively used, a group support system should be not only useful and usable, but its deployment should also be feasible. In this respect, Munkvold and Anson (2001), Roy (2012), and Kolfschoten and Nunamaker (2015a, b) identify several possible barriers and conditions to be observed. The main conditions are the same ones met in any change management project:

• There must always be a “champion” for the GSS implementation project;
• The cognitive load must be acceptable;
• The benefits of GSS usage should be noticeable.

In Sects. 3.3 and 3.4, we will present how choices are made by groups and the concept and methods of collaboration engineering (CE), which is meant to facilitate the implementation of GSS, respectively.

3.1.4 Crowdsourcing—A Special Case of Collaboration

There are several decision situations when the problems to be solved are very complex and persistent, and a solution cannot be found by managers and their close and permanent assistants. In such cases, a working solution is to ask for the collaboration of other people, such as external experts, business partners, or even anonymous clients of the enterprise's products or services.

In Sect. 1.1.1, we mentioned the concept of crowdsourcing, a new web-based business model meant to enable finding creative and effective solutions from a very large group of people in answer to a [usually] open call for proposals. The crowdsourcing definition proposed in (Howe 2006) is “the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call”. A more integrated definition is provided by Estellés-Arolas and González-Ladrón-de-Guevara (2012):

Crowdsourcing is a type of participative online activity in which an individual, an institution, a non-profit organization or a company proposes to a group of individuals of varying knowledge, heterogeneity and number, via a flexible open call, the voluntary undertaking of a task. The undertaking of the task, of variable complexity and modularity, and in which the crowd should participate bringing their work, money, knowledge and experience, always entails mutual benefit. The user will receive the satisfaction of a given type of need, be it economic, social recognition, self-esteem or the development of individual skills, while the crowdsourcer will obtain and utilize to their advantage what the user has brought to the venture, whose form will depend on the type of activity undertaken.

The initial particular subclasses of the general class of crowdsourcing activities were described in (Howe 2006; Chiu et al. 2014):

• Collective intelligence, meant to build the “wisdom of the crowd” in order to provide new insights and ideas to innovate and create products, processes and services;
• Crowd creation, meant to enable the users to generate various types of content to be shared with other people;
• Crowd voting, intended to collect the opinions and ratings of the people on ideas, actions to be performed, or products and services to be launched;
• Crowd funding, which, however, is viewed by Brabham (2013, p. XXIII) as a stand-alone concept.

Brabham (2008, 2013) and Chiu et al. (2014) remark on the potential of crowdsourcing to support problem solving and hard managerial decision-making. The same authors also warn about possible bad outcomes when the crowd's ideas and opinions are used in an inappropriate way. Chiu et al. (2014) propose a framework to describe the usage of crowdsourcing to support the decision-making activities of Herbert Simon's process model (see Sect. 2.1.1). The phases of the framework are:

• Identification of the problem to be solved, or the opportunity to be exploited, based on opinions and predictions collected from the crowd, and task definition. This corresponds to the intelligence phase of Simon's process model;
• Task broadcasting to the crowd, which is performed, in the majority of cases, in the form of an open call. The crowd may be composed of either enterprise employees (as a modern form of the traditional “suggestion box”) or customers and/or other outsiders;
• Idea generation by the crowd in the form of various action alternatives. This basically corresponds to the design phase of Simon's process model. The ideas are viewed as cooperative when they contribute to accumulating knowledge; they can also be of a competitive nature when they serve to find solutions to problems;
• Evaluation of the ideas, by the same members of the crowd that generated them, by another crowd, or by a group of experts;
• Choosing the solution through a voting mechanism.
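The phases above can be traced in a minimal, purely illustrative sketch of one crowdsourcing round; the task, ideas, and ratings are invented, and a simple mean-rating rule stands in for the final voting mechanism.

```python
from statistics import mean

# Intelligence phase + broadcast: the task sent out as an open call (hypothetical).
task = "How should we reduce product-return rates?"

# Design phase: action alternatives generated by the crowd (illustrative data).
ideas = ["Better size guides", "Free exchange service", "AR try-on tool"]

# Evaluation phase: each crowd member rates every idea on a 1-5 scale.
ratings = {
    "Better size guides":    [5, 4, 4, 5],
    "Free exchange service": [3, 4, 2, 3],
    "AR try-on tool":        [4, 3, 5, 2],
}

# Choice phase: pick the idea with the highest mean rating (one simple voting rule).
scores = {idea: mean(r) for idea, r in ratings.items()}
winner = max(scores, key=scores.get)
print(winner, scores[winner])  # -> Better size guides 4.5
```

In a real deployment each phase may involve a different crowd (e.g. employees generate ideas, customers rate them), and the choice phase may use any of the voting rules discussed in Sect. 3.2.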

3.2 Fundamentals of Social Choice

Social choice theory (SCT) is a relatively old research topic that can provide useful insights into how to aggregate individual inputs into an output that reflects as accurately as possible the collective view of the whole group. A brief introduction to SCT will be given in Sect. 3.2.1 to highlight the main challenges that are met when a particular aggregation method is used to construct the collective view of the group from the individual preferences. In the next section, these challenges will be discussed in the context of the basic aggregation mechanism employed in any Group decision support system (GDSS), namely the voting system. Section 3.2.2 contains a review of several of the most frequently used voting rules and the inherent particularities that make them more or less adequate for a specific problem. In the next section we summarize the major axiomatic characterizations of these rules and their immediate consequence of displaying paradoxical results. When these voting rules are supported by a GDSS, there are additional concerns, usually overlooked in classical SCT, such as: access to voting results during the voting process, frequency of voting, dynamic listing of alternatives, etc. These concerns will be synthesized in Sect. 3.2.4, where the practical use of a voting rule is analyzed from the extended perspective of exploiting some common functionalities offered by a GDSS.

3.2.1 Aggregating Individual Preferences

In the Stanford Encyclopedia of Philosophy, social choice theory is defined as "the study of collective decision processes and procedures … concerning the aggregation of individual inputs into a collective output" (Social Choice Theory 2013). Collective decision making constitutes the aggregation of the individual views of the group's members into a collective decision that reflects, as appropriately as possible, the collective view of the whole group. Different aggregation mechanisms, particularly in the general framework of voting systems, have been developed and debated upon ever since Athenian democracy. Nowadays, this problem is almost pervasive in any of the collaborative technologies that we use when a group of heterogeneous entities (humans or artefacts) is involved in making collective decisions. Multi-agent systems, cyber-physical systems, social networks and crowdsourcing, and problems in e-commerce and e-governance are well-known research fields that imply the aggregation of multiple perspectives. To illustrate the problem of aggregating individual preferences we can review the example given in Sect. 1.5.1. The collaborative decision making (CDM) problem concerns the selection of the approach to be used in constructing an integrated application for the current information system available in the company. Section 1.5.2 contained several possible rules for aggregating the preferences of the people involved in the decision-making process. They took into account the decision powers of participants determined by their legal and/or rational position. Let us assume the simple majority rule, where the participants have equal decision powers, we have nine decision-makers (nd = 9), and, as defined in Sect. 1.5.3, three alternatives to design and implement a new information system: A1, A2, and A3. If, for one of the four objectives defined in Sect. 1.5.3, the distribution of preferences among participants is:

• four participants have the preference order A1 A2 A3,
• three participants have the preference order A3 A2 A1,
• two participants have the preference order A2 A3 A1,

then their preferences can be easily aggregated in a preferences table for each group of individuals sharing a common view (Table 3.3).

To select the winning alternative, we need an aggregation mechanism to combine the dissimilar views among participants. A detailed description of the aggregation mechanisms employed in this illustration will be given in Sect. 3.2.2. Here we highlight only the dissimilar outputs that may arise when using different approaches to aggregate the individual views. For example, if we consider how often an alternative is ranked first (the plurality rule), then the winning alternative is A1. But following this rule, five decision-makers against four will be dissatisfied with this choice as long as they prefer A2 or A3 to A1. Alternatively, if each alternative receives, for each decision-maker's ranking, a number of points equal to the number of alternatives minus the place where this alternative is ranked (the Borda rule), then the winning alternative is A2. Moreover, if we successively eliminate the alternative that is ranked first by the lowest number of individuals (the single transferable vote rule), then A2 will be eliminated first; between the remaining alternatives, A3 will win, and A1 is eliminated since it is ranked first by a lower number of decision-makers (four against five). As may be noticed, the rule used to aggregate the individual views produces dissimilar outputs and consequently plays a role at least as important as the individual preferences of the group members.
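The three dissimilar outcomes described above can be checked with a short script. This is an illustrative sketch, not part of the book; the `profile` encoding and the helper functions are our own:

```python
# Illustrative sketch (not from the book): the nine-voter profile above,
# encoded as (count, ranking) pairs, and three of the aggregation rules.
from collections import Counter

profile = [(4, ["A1", "A2", "A3"]),
           (3, ["A3", "A2", "A1"]),
           (2, ["A2", "A3", "A1"])]

def plurality(profile):
    scores = Counter()
    for count, ranking in profile:
        scores[ranking[0]] += count          # one point per first place
    return scores.most_common(1)[0][0]

def borda(profile):
    scores = Counter()
    na = len(profile[0][1])
    for count, ranking in profile:
        for place, alt in enumerate(ranking):
            scores[alt] += count * (na - 1 - place)   # na-1, na-2, ..., 0
    return scores.most_common(1)[0][0]

def stv(profile):
    remaining = set(profile[0][1])
    while len(remaining) > 1:
        firsts = Counter({a: 0 for a in remaining})
        for count, ranking in profile:
            top = next(a for a in ranking if a in remaining)  # votes transfer
            firsts[top] += count
        remaining.remove(min(firsts, key=firsts.get))  # drop weakest alternative
    return remaining.pop()

print(plurality(profile), borda(profile), stv(profile))  # -> A1 A2 A3
```

Running the script reproduces the three dissimilar winners from the text: A1 under plurality, A2 under Borda, and A3 under the single transferable vote.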
The first influential work on the development of social choice theory dates back to 1785, when the French philosopher Jean Antoine Nicolas de Caritat, marquis of Condorcet, published two fundamental results (Condorcet 1785): Condorcet's Jury Theorem and Condorcet's Paradox. Condorcet's Jury Theorem deals with the probability of a group making the correct decision under the simple majority rule (see Sect. 3.2.2.1) and is one of the earliest mathematical arguments for crowd voting (Sect. 3.1.4). The theorem assumes that the correct decision of selecting one out of two alternatives is unknown to the decision-makers. It states that, when the average probability of an individual making the correct decision exceeds 50 %, the probability of the group as a whole making the correct decision asymptotically converges to 1 as the size of the group increases. Condorcet's paradox indicates a situation in which collective preferences can be cyclic even if the preferences of individuals are not. This is paradoxical because it means that the collective decision is not rational, i.e., it is intransitive. An alternative that defeats every other alternative in pairwise comparison by a simple majority of votes is called a Condorcet winner, and an aggregation method that always selects the Condorcet winner when one exists avoids this paradox. In the field literature, there are many aggregation methods that adhere to this Condorcet principle, commonly called Condorcet extensions (see Sect. 3.2.2.2).
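The Jury Theorem can be illustrated numerically. Under the theorem's assumption of independent voters who are each correct with probability p > 0.5, the probability of a correct simple-majority decision is the upper tail of a binomial distribution (a hedged sketch; the function name is ours):

```python
# A numerical sketch of Condorcet's Jury Theorem, assuming independent
# voters who are each correct with probability p.
from math import comb

def group_accuracy(n, p):
    # probability that a strict majority of n (odd) voters is correct:
    # the upper tail of a Binomial(n, p) distribution
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 101, 1001):
    print(n, round(group_accuracy(n, 0.6), 4))
# group accuracy rises from 0.6 toward 1.0 as the group grows
```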

Table 3.3 Preferences distribution among the nine participants for the three alternatives

Number of participants (nd)    4     3     2
1st preference                 A1    A3    A2
2nd preference                 A2    A2    A3
3rd preference                 A3    A1    A1

The first formal approach to the problem of aggregating the individual preferences into a collective view was made by Kenneth Arrow in his doctoral thesis "Social Choice and Individual Values" (Arrow 1963). His seminal work is unanimously considered the foundation of modern Social Choice Theory for studying the aggregation mechanisms for Collective Decision Making. In his famous Impossibility Theorem, he proved that, for at least three alternatives, there is no aggregation mechanism that simultaneously satisfies three basic axioms:

• unanimity (or Pareto-optimality): if every individual agrees that alternative A is better than alternative B, then the collective preference order should also reflect that A is better than B;
• independence of irrelevant alternatives: if the collective preference states that alternative A is better than alternative B and someone changes his preference with respect to another alternative C, then the collective preference should still prefer A to B;
• non-dictatorship: there is no individual who can dictate his preference in the ranking of alternatives regardless of the preferences of the other individuals.

The key merit of his theorem was the use of an axiomatic framework for analyzing different aggregation rules, to demonstrate whether a certain aggregation rule does or does not satisfy a set of desired conditions or axioms (i.e. normative principles for aggregating the preferences in CDM). By following this axiomatic practice, SCT contributes two broad classes of theorems that differentiate a particular aggregation rule. These are either characterization theorems, demonstrating that a particular aggregation rule is the only one that satisfies a specified set of axioms, or impossibility theorems, demonstrating that there is no aggregation rule that satisfies a specified set of axioms. After a review of some of the most frequently used aggregation rules (see Sect. 3.2.2), in the subsequent section we will summarize the major axiomatic characterizations of these rules and their immediate consequence of displaying paradoxical results.

3.2.2 Voting Mechanisms

The many voting rules described in the literature are frequently grouped into three main classes:

• scoring rules, which satisfy the reinforcement property;
• Condorcet extensions, which satisfy Condorcet consistency;
• other rules that do not fit in either of the first two classes.

Here we will briefly discuss some of the common rules used in practice to aggregate individual preferences, while their axiomatic characterization will be given in the next section. Basically, an aggregation rule defines a structure to express the individual preferences for a set of alternatives and a method to compute their aggregation for obtaining one or more winning alternatives. This structure is typically called a "ballot" in voting and may take different forms, such as the name of a single selected alternative or a ranking of the alternatives.

3.2.2.1 Scoring Rules

Due to their simplicity, scoring rules are among the most widely used voting rules to reach common decisions in practice. These rules choose the winning alternative based on a computed score, such as the maximum sum of points given by each individual preference to a specific alternative. The vast majority of GDSS include specialized tools to support voting activities that employ scoring rules. In this class we have:

• Plurality rule The plurality rule, or simple majority rule, is maybe the most widely used voting procedure in practice. It assumes that each decision-maker selects and submits a single alternative, the most preferred, out of the total set of na alternatives. As a result, each alternative will get a number of points equal to the number of decision-makers who ranked it first. The winning alternative will be the one receiving the highest number of points. When the winning alternative exceeds a certain threshold, usually 50 %, the rule is commonly called the majority rule. Because the preferences are obtained only for the most preferred alternative, the rule ignores the distribution of preferences over the remaining, unselected alternatives. Therefore, the rule is considered appropriate when na is 2; otherwise it may be viewed as encouraging the individuals to submit a strategic or insincere vote when the preferred alternative does not have a real chance of winning. To diminish this problem, a frequently used variant of this rule is to use successive rounds (plurality with run-off), where the winning alternative is chosen in the second (or third) round by applying the plurality rule to the top two (or three) alternatives from the previous round(s). Nevertheless, the plurality rule with run-off does not eliminate the no-show paradox, whereby it can be better to abstain than to vote for the favourite alternative.
• Borda's rule The voting procedure proposed by Jean-Charles de Borda (1781) presumes that each decision-maker submits a complete ranking of all na alternatives. The rule counts the sum of all the points obtained by each alternative from each decision-maker and selects as winning alternative the one with the highest number of points. A certain alternative gets x points from a decision-maker if that alternative is preferred to x other alternatives. Thus, for each decision-maker who places an alternative first, that alternative receives na − 1 points, for each decision-maker who places it second it receives na − 2 points, and so forth. Because Borda's rule elicits the preferences over the entire set of alternatives, it tries to compensate for the disadvantages of the plurality rule at the expense of cognitive load and communication costs. A variant of Borda's rule meant to lessen its inherent complexity when the decision-maker is facing a large set of alternatives is to restrict the ranking to a limited number of k < na

alternatives. This is known as the Truncated k Borda Count and is equivalent to the conventional plurality rule when k is 1.
• Approval rule Approval or acceptance voting was analyzed for the first time by Brams and Fishburn (1978) and presumes that each decision-maker selects a subset of acceptable alternatives. Contrary to other voting rules, it does not enforce a linear order over the set of alternatives. Each alternative gets a number of points equal to the number of decision-makers who approved the alternative, and the alternative with the highest number of approvals wins. When several alternatives get the same number of points, an additional tie-breaking procedure should be used. This rule does not require ranking the alternatives or giving more than one vote to an alternative; therefore, it is seen as a compromise between the plurality and the Borda rules in terms of preference elicitation and communication costs.
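A minimal sketch of approval balloting follows; the five ballots are invented for illustration and are arbitrary subsets of the alternatives, as the rule permits:

```python
# A minimal sketch of approval voting; the ballots below are invented
# for illustration and are arbitrary subsets of the alternatives.
from collections import Counter

ballots = [{"A1", "A2"}, {"A2"}, {"A2", "A3"}, {"A1"}, {"A2", "A3"}]

scores = Counter()
for approved in ballots:
    scores.update(approved)                  # one point per approval

winner, points = scores.most_common(1)[0]
print(winner, points)  # -> A2 4
```

Note that A1 and A3 end up tied with two approvals each; had they tied for first place instead, the additional tie-breaking procedure mentioned above would be needed.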

3.2.2.2 Condorcet Extensions

All the scoring rules mentioned before violate the Condorcet principle (see Sect. 3.2.1). Therefore, the aggregation rules that satisfy the Condorcet principle are considered a special class of voting rules that select the Condorcet winner if it exists. As a general framework for these rules, the score of an alternative is computed on the basis of its pairwise comparison with every other alternative, a series of successive contests between every candidate alternative and every other alternative. Even if these rules are considered very fair, performing pairwise comparisons over a large number of alternatives is time-consuming. To alleviate this problem, there are many voting variants supposed to work for a large number of alternatives, such as the Debian Voting System (Debian 2016) or the Condorcet-fuse algorithm (Montague and Aslam 2002). Moulin (1988) demonstrated that all these rules are vulnerable to the no-show paradox. Next, we will describe the most significant aggregation rules belonging to this special category, namely:

• Copeland rule, or the pairwise comparisons rule, chooses as winning alternative the one that wins the most pairwise majority comparisons. In its standard form an alternative gets a point for every pairwise majority win, and some fixed number of points, between 0 and 1, for every draw. A common variant is to compute the score of an alternative as the number of wins minus the number of losses. In any case, the winning alternative is the one with the highest number of points. Obviously, when using any of these scoring methods there is a probability of ending up in a tie between some winning alternatives. This probability increases as the group gets larger and/or the alternatives get fewer. In such a case, an additional mechanism is needed to discriminate among the winning alternatives. The Copeland rule does not necessarily lead to a majority preference winner, but captures the relative support between alternatives.
• Minimax rule, or the Simpson-Kramer rule (Kramer 1977; Simpson 1969), orders the alternatives according to their minimal pairwise scores derived from the decision-makers'

preferences. The number of alternatives should be at least three and the preferences are described in linear order. Each alternative gets a number of points equal to the number of decision-makers who prefer the alternative in its worst pairwise contest or duel. Then, the alternative with the highest minimax score wins. It is therefore considered a pessimistic aggregation, since it chooses the best of the worst alternatives. For the example from Sect. 1.5.1, the results of the pairwise scores may be tabulated as represented in Table 3.4, where alternative A2 wins.
• Ranked pairs rule, developed by Nicolaus Tideman (1987), generates a ranking of all alternatives and selects the first-ranked alternative as the winning alternative. The ranking sorts all pairwise comparisons by the magnitude of the victory, computed as the number of people who preferred one alternative over the other. To discriminate among equal magnitudes, an additional score is computed, called the margin of victory: the number of people who favour alternative A over B minus the number of people who favour alternative B over A. For example, in Table 3.4, the preference for A2 over A3 has the strength 6 with margin 3. In the same table, the preference for A2 over A1 has the strength 5 with margin 1, and the preference for A3 over A1 has the strength 5 with margin 1. If two or more pairwise victories have an equal strength and margin, then they are considered equivalent, as is the case here for A2 over A1 and A3 over A1. In any case, A2 is the winning alternative since it is first-ranked and there is no preference cycle.
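The pairwise scores in Table 3.4 and the resulting Copeland and minimax winners can be reproduced programmatically. This is a sketch using our own encoding of the nine-voter profile from Table 3.3:

```python
# Sketch: pairwise majority scores for the nine-voter profile (Table 3.3),
# then the Copeland and minimax (Simpson-Kramer) winners.
profile = [(4, ["A1", "A2", "A3"]),
           (3, ["A3", "A2", "A1"]),
           (2, ["A2", "A3", "A1"])]
alts = ["A1", "A2", "A3"]

# wins[a][b] = number of voters who prefer a to b
wins = {a: {b: 0 for b in alts if b != a} for a in alts}
for count, ranking in profile:
    for i, a in enumerate(ranking):
        for b in ranking[i + 1:]:
            wins[a][b] += count

copeland = {a: sum(wins[a][b] > wins[b][a] for b in wins[a]) for a in alts}
minimax = {a: min(wins[a].values()) for a in alts}

print(minimax)                           # the "Min" column of Table 3.4
print(max(minimax, key=minimax.get))     # -> A2
print(max(copeland, key=copeland.get))   # -> A2 (the Condorcet winner)
```

The computed minimax scores (4, 5 and 3 for A1, A2 and A3) match the "Min" column of Table 3.4, and A2 wins under both rules, being the Condorcet winner of this profile.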

3.2.2.3 Other Rules

Many GDSS employ voting rules that do not fit in any of the categories mentioned before. Two of the best known voting rules are:

• STV (single transferable vote), or the Hare (1859) system, wherein the decision-makers rank the alternatives according to their preferences. In a simple version, the alternative ranked first by the lowest number of decision-makers is eliminated in successive rounds (at most na − 1) until the last remaining alternative wins. At the same time, the votes for the eliminated alternatives are transferred to the next remaining alternatives. The rule does not necessarily require ranking all alternatives; non-ranked alternatives implicitly get the lowest rank. There are a number of different variations of the STV aggregation, including the original version proposed by Hare, with respect to how the votes for an eliminated alternative are transferred to another alternative.

Table 3.4 Pairwise preference scores among the nine participants for the three alternatives

       A1    A2    A3    Min
A1     –     4     4     4
A2     5     –     6     5
A3     5     3     –     3

• Bucklin's rule is known as the majoritarian compromise and is a class of aggregation rules that, according to Haines (1921), was first proposed by Condorcet and promoted later on by James W. Bucklin. In its simplified version, the decision-makers rank the alternatives according to their preferences and the winning alternative will be the one with the highest median ranking. If the first-ranked alternative is voted for by more than half of the decision-makers, then that alternative wins. Otherwise, the second-ranked alternatives are added to the first ones, checking again whether there is an alternative with a majority to be selected as the winning alternative. The process continues by adding the next-ranked alternatives until some alternative receives more than half of the votes. When there is more than one alternative with a majority, the alternatives are discriminated with respect to the margin by which they cross the majority threshold. More complex versions of this rule consider either changing the majority threshold or limiting the number of alternatives that can be ranked. In the example from Table 3.3, no alternative receives a sufficient number of first-ranked votes (5 votes). If the second-ranked alternatives are added, both A2 (with nine votes) and A3 (with five votes) cross the threshold and, consequently, A2 wins by the larger margin.
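The simplified Bucklin procedure on the nine-voter profile of Table 3.3 can be sketched as follows (the encoding and the function are our own):

```python
# A sketch of the simplified Bucklin procedure on the Table 3.3 profile:
# ranks are added level by level until an alternative passes the majority.
from collections import Counter

profile = [(4, ["A1", "A2", "A3"]),
           (3, ["A3", "A2", "A1"]),
           (2, ["A2", "A3", "A1"])]
majority = 5                                  # more than half of nd = 9

def bucklin(profile):
    scores = Counter()
    for level in range(len(profile[0][1])):
        for count, ranking in profile:
            scores[ranking[level]] += count   # add the next-ranked votes
        over = {a: s for a, s in scores.items() if s >= majority}
        if over:                              # break ties by larger margin
            return max(over, key=over.get)

print(bucklin(profile))  # -> A2 (9 votes at the second level, vs. 5 for A3)
```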

3.2.3 Axioms and Paradoxes

There are many desired properties defined in Social Choice Theory to characterize an aggregation rule. Zwicker (2016) classifies them into three basic categories according to their strength. The first category includes the basic requirements for avoiding undesired results. These are:

• anonymity: if two decision-makers exchange their ballots, then the winning alternative does not change;
• neutrality: if every decision-maker who previously preferred A over B now prefers B over A, and vice versa, then the positions of A and B in the collective order must also be reversed;
• the unanimous, consensual or Pareto property: if alternative A is preferred by everyone to B, then the collective order should also rank A above B.

As may be noticed in Table 3.5, inspired from Brandt et al. (2016), these axioms are satisfied by almost all the aggregation rules, but, in some contexts, they are completely irrelevant. For example, a vote by the members of the board of a company that wishes to add a new internal regulation is often neither anonymous nor neutral. Similarly, any voting system in which some voters have different decision powers, as exemplified in Sect. 1.5.4, is not anonymous. Moreover, when the collective order is not resolute (i.e., there is not a single winning alternative), the main approaches to tie-breaking (a fixed order of alternatives, a designated decision-maker to break ties, an additional voting rule, etc.) violate these axioms. A classical theorem with respect to this issue was given by Moulin (1983) and states that, for a number of alternatives

na strictly greater than 2 and a number of decision-makers nd divisible by any integer between 1 and na, there is no resolute aggregation rule that is simultaneously anonymous, neutral and consensual. Consequently, even if the axioms from this category are satisfied by almost any aggregation rule, the problem of tie-breaking needs to be carefully addressed.

Table 3.5 The axiomatic characterization of some common voting rules

Categories            Voting rules   Anonymous  Neutral  Pareto  Monotonicity  Reinforcement  Condorcet  IIA  Strategy-proofness
Scoring rules         Plurality      Y          Y        Y       Y             Y              N          N    N
                      Borda          Y          Y        Y       Y             Y              N          N    N
                      Approval       Y          Y        Y       Y             Y              N          N    N
Condorcet extensions  Copeland       Y          Y        Y       Y             N              Y          N    N
                      Minimax        Y          Y        Y       Y             N              Y          N    N
                      Ranked pairs   Y          Y        Y       Y             N              Y          N    N
Other rules           STV            Y          Y        Y       N             N              N          N    N
                      Bucklin        Y          Y        Y       Y             N              N          N    N

The second category of axioms enforces some additional constraints that may look controversial in some particular contexts. These are:

• weak monotonicity: if alternative A wins, and a decision-maker increases his vote in favour of this alternative, then it still wins;
• reinforcement: if we split the decision-makers into two sub-groups and an alternative would win in both sub-groups, then it will also win for the initial group; and
• Condorcet consistency: if A is a Condorcet winner, then A should be selected as the winning alternative (see Sect. 3.2.1).

The above axioms cannot be simultaneously satisfied, and therefore the most important one should be selected (see Table 3.5). Some classical theorems that prove their incompatibility were formulated by Kenneth May (1952), John Smith (1973) and the Nobel Prize winner Amartya Sen (1984). The axioms in this category deal with the situations when either the individuals modify their preferences (i.e. monotonicity) or the number of individuals changes (i.e. reinforcement).
As an alternative to monotonicity, a stronger notion, namely positive responsiveness (if alternative A wins and a decision-maker increases his vote in favour of this alternative, then A becomes the unique winning alternative), is frequently used in SCT. Condorcet consistency and reinforcement are important axioms that are hard but not impossible to satisfy, and they each identify a big and important class of aggregation rules discussed in the previous section (the Condorcet extensions and the scoring rules, respectively).

The third category of axioms is considered the strongest set of constraints, which are nearly impossible to satisfy with any aggregation rule, except under certain assumptions. These are:

• independence of irrelevant alternatives (IIA), as defined in Sect. 3.2.1;
• strategy-proofness: voting rules that are non-manipulable.

While the independence of irrelevant alternatives is covered by Kenneth Arrow's Impossibility Theorem, for the non-manipulability case the major theorem was given independently by Gibbard (1973) and Satterthwaite (1975), who proved the impossibility of formulating a non-dictatorial voting rule that is immune to strategic manipulation. Individuals may often submit preferences that do not reflect their true view with the aim of maximizing the chances of a certain result. Nevertheless, strategic manipulation presumes knowledge about the preferences of other voters or their possible engagement in strategic voting and coalitions. Assuming this knowledge is accessible, someone still needs to compute the manipulative strategy. Therefore, the computational complexity of finding an effective strategy is considered a barrier to manipulation. For example, Conitzer and Walsh (2016) synthesise the computational complexity required to manipulate different voting rules by various numbers of manipulators with constructive or destructive intentions.
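Strategic manipulation can be made concrete with a toy example, an invented two-voter profile, not from the book: under Borda with alphabetical tie-breaking, a voter who sincerely prefers B flips the outcome by "burying" the sincere winner A at the bottom of the reported ranking:

```python
# A toy illustration (invented profile) of strategic manipulation:
# under Borda with alphabetical tie-breaking, voter 2, who sincerely
# prefers B, changes the outcome by "burying" the sincere winner A.
from collections import Counter

def borda_winner(rankings):
    scores = Counter()
    na = len(rankings[0])
    for ranking in rankings:
        for place, alt in enumerate(ranking):
            scores[alt] += na - 1 - place
    # iterate alternatives alphabetically so ties break in favour of "A"
    return max(sorted(scores), key=scores.get)

sincere     = [["A", "B", "C"], ["B", "A", "C"]]
manipulated = [["A", "B", "C"], ["B", "C", "A"]]  # voter 2 buries A

print(borda_winner(sincere))      # -> A (A and B tie with 3 points each)
print(borda_winner(manipulated))  # -> B (B: 3, A: 2, C: 1)
```

By reporting C above A, voter 2 lowers A's score enough to break the tie in favour of B, the outcome he sincerely prefers.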
Of the voting rules described in Sect. 3.2.2, the most difficult to manipulate are Copeland, STV, and Ranked Pairs. The immediate consequence of a voting rule not satisfying a certain axiom is the possibility of yielding unintuitive or paradoxical results. Starting with Condorcet's paradox exemplified in Sect. 3.2.1, there is a large set of voting paradoxes identified in SCT. Felsenthal (2010) classifies these paradoxes into two broad classes: simple, when the design variables of a vote lead directly to unintuitive results; and conditional, when modifying one of the many possible variables of a vote, while keeping all the others constant, leads to unintuitive results. These variables may be: the group size, the number of alternatives, the way of ranking the alternatives (i.e. cardinal or ordinal), the availability of intermediate results during the voting procedure, the number of winning alternatives, tie-breaking procedures, etc. The simple voting paradoxes are a direct effect of not complying with the axioms discussed before (i.e. the Condorcet paradox, the Condorcet winner paradox or the Pareto paradox). The same is true for the conditional paradoxes, such as: (a) the additional support paradox (Smith 1973) for the lack of monotonicity, (b) the multiple districts paradox (Young 1974) for the lack of reinforcement, and (c) the violation of the subset choice condition (Fishburn 1977) for the lack of IIA.
Other paradoxes concern: (a) the sincerity of expressing the true preferences, such as the truncation paradox (Fishburn and Brams 1983), when a voter may obtain a better outcome if he votes honestly only for a subset of alternatives and not for the entire set; (b) the participation in the voting process, such as the no-show paradox (Fishburn and Brams 1983), when a voter may obtain a better outcome if he decides not to take part in the voting process rather than to vote honestly; and (c) agenda settings, such as the path dependence paradox (Plott 1973), when the selection of the winning alternative displays different results if it is done in subsequent steps over subsets of alternatives rather than over the entire set of alternatives.

3.2.4 Implications for Group Support Systems

In Collective Decision Making, different forms of voting are used in almost any phase of the decision-making process, not just in the "choice" phase of the classical decision-making model (see Sect. 2.1.1), as one might hastily think. They are used in any activity whose resulting deliverable (or product) consists of a set of adopted ideas or concepts. Indeed, establishing the set of decision objectives, distributing and allocating the tasks among participants, adopting a specific approach or model(s) (see Sect. 1.5.2) or building consensus all require a collective view on these relevant issues. In the end, this collective view is built step by step by using a voting method for all the key elements of a decision-making process. In addition to the formal aggregation of different views, voting implies the configuration of many procedural settings, such as: (a) the time to call the vote, (b) the way of expressing the views, (c) stop conditions, (d) the length of the polling process, (e) rules to interpret the results, and so on. This issue will be discussed in more detail in Sect. 3.4 in the context of collaboration engineering.

Each voting method presumes the analysis of a number of alternatives and the filling out of a ballot in the relevant structure. Some voting methods require more effort for ranking all the competing alternatives, whereas other methods require less effort to select just one alternative. This effort has two facets: (a) the cognitive effort to analyze the alternatives, and (b) the physical effort to fill out the corresponding data structure. The degree of discrimination among alternatives has a significant impact on the cognitive effort required of the decision-maker to communicate more or less about his view. On the one hand, this is related to the number of possible combinations imposed by the voting method and, on the other hand, to the number of available alternatives.
For instance, while in a plurality vote there are n possible combinations to express the preference over n alternatives, in approval voting there are 2^n combinations, and in Borda's method one can count n! possible combinations. The cognitive effort required to analyze the alternatives also depends on the ease of understanding how the winning alternative is selected. To promote an active and honest contribution, the participants should understand how their preferences are aggregated into the collective output. For instance, the plurality aggregation rule is easier to rationalize than an aggregation rule that requires complex calculations, such as Borda's method. The physical effort relates to the number of actions required to express a vote. A decision-maker may lose interest in expressing his view if the number of actions exceeds a certain threshold. This involuntarily leads to either the truncation paradox, when the decision-maker expresses his view only for those alternatives he considers important and neglects the others, or the no-show paradox, when the decision-maker is not willing to participate anymore. Gavish and Gerdes (1997) define five levels of ballot complexity with respect to the physical effort required to discriminate among alternatives:

• The first level: the decision-maker selects only the highest valued alternative (i.e. plurality voting);
• The second level: the decision-maker splits the alternatives into an acceptable set and an unacceptable one (i.e. approval voting);
• The third level: the decision-maker ranks a limited number of acceptable alternatives (i.e. multiple vote);
• The fourth level: the decision-maker ranks the entire set of alternatives (i.e. Borda count);
• The fifth level: the decision-maker quantifies the preferences for the entire alternative set (i.e. average score method).

Obviously, this classification becomes less relevant when the number of alternatives is small.
But when we face a large number of alternatives and a large number of participants, they automatically lead to difficulty in computing the aggregated output. Therefore, some authors are investigating the possibility of designing suitable protocols that elicit partial preferences in sequential order. For example, Conitzer and Sandholm (2005) show that some of the voting methods discussed in Sect. 3.2.2 can be computed with a lower communication overhead. On the other hand, in groups in which members rank alternatives, the participants need to exchange more information than in groups in which members only have to select one alternative. The expressive power of the preference structure enables a group to use fewer or more rounds to reach a final decision. Consequently, as Cheng and Deek (2012) remarked, the net effect of the communication complexity of preference elicitation in a voting method on the time to reach a decision is not clear.

A Group Decision Support System (GDSS) may be configured to show the voting results at any time during the voting process or only after the voting process has ended. Open access to preliminary voting results may influence the participants who have not yet cast their vote. On the other hand, open access to the voting results during an on-going voting procedure may enable the early argumentation of some controversial issues and consequently a faster agreement among participants. Alongside access to the voting results during the voting process, a GDSS offers additional support for changing the set of alternatives, the group composition or even the intended vote. Adding and deleting alternatives during the voting procedure may improve the quality of the decision as long as these changes are based on the discussions over the previous set of alternatives.
But, on the other hand, it increases the effort to elicit the individual preferences again. Therefore, we have a trade-off between storing and reusing the results from the first rounds and the complexity of eliciting new preferences in the following round. For example, Chevaleyre et al. (2011) address from this perspective the design of suitable protocols that specify how to compile the information provided in the first stage and how to elicit the information to be communicated in the second stage.

3.3 Further Extensions from Social Choice Theory to Group Decisions

In modern applications, voting rules are not exclusively used to aggregate the individual preferences of a group of people, but are equally employed to aggregate judgments, to allocate resources or to aggregate arguments in structured communication frameworks. This section discusses the further possible employment of some relevant results from Social Choice Theory (SCT) in other application domains. If the voting theory discussed in Sect. 3.2 concerns mainly the "choice" phase of the classical model of decision-making, there are many extensions derived from SCT that provide useful insights equally applicable to the "design" phase. Even if they are not as traditionally employed in Collaborative Decision Making (CDM) as voting, these extensions already have a significant impact on the development of useful tools that enforce a certain level of rationality in the decision-making process. This section presents the most notable developments from SCT that are employed in knowledge-driven DSS, namely: judgment aggregation (Sect. 3.3.1), resource allocation (Sect. 3.3.2) and group argumentation (Sect. 3.3.3).

3.3.1 Judgment Aggregation

Judgment aggregation is an extension of the classical preference aggregation problem to situations when we need to reach simple "yes" or "no" judgments on a set of interconnected logical propositions that can be evaluated to be "true" or "false". In this case the goal is to aggregate the individual judgments of individual experts and not to select one from a set of mutually exclusive alternatives. The basic assumption in this case is to acknowledge that "deciding" is not always an individual process but in many cases a collective one. While the aggregation rules discussed in the previous section are focused on the "implementation" phase of the classical decision process (see Sect. 3.2.2), judgment aggregation looks into the reasons behind the individual choices when the individuals are expected to process complex information and arrive at a final choice in a collaborative, not individual, way. Indeed, many real-life decisions imply collecting opinions on different issues and not just ranking a set of complex alternatives. Therefore the formal study of judgment aggregation is considered by many authors to hold significant potential for some applications of collective decision-making, such as reaching agreements in mixed human-agent teams, merging and translating dissimilar ontologies that reflect individual experiential knowledge, and collective annotation of different types of data (Endriss 2016). Unlike the old problem of preference aggregation, judgment aggregation has been studied in a formal way only recently. List and Pettit (2002, 2004) used for the first time a formal model with a logical representation of propositions to study the aggregation mechanisms of experts' judgments. Similar to voting, they show that paradoxical circumstances may happen in judgment aggregation as well, when the aggregation of consistent individual judgments can generate logically inconsistent collective results.
This problem is known in SCT as the doctrinal paradox when there is an external doctrine responsible for the problem (Kornhauser and Sager 1993) or, more generally, the discursive dilemma (List and Pettit 2002) that generalizes Condorcet's paradox of voting (see Sect. 3.2.1). If we recall the example given in Sect. 1.5.3, a plausible doctrine may be that the adoption of a more flexible solution requires the company designing the system from the software pieces available on the market and, at the same time, exploiting the services provided by the cloud computing companies. Note that in this case the alternatives are not necessarily mutually exclusive. More formally, suppose the decision-makers (Mr. X, Ms. Y and Mr. W) need collective judgments on four propositions:

a: the company can design the system from the software pieces available on the market;
b: there are relevant and trustworthy cloud-based services available on the market;
o: adopt a more flexible solution;
(a ∧ b) ↔ o: using in the right combination the software pieces and the cloud-based services available on the market is necessary and sufficient for adopting a more flexible solution.

Here, the logical symbol "↔" means "if and only if", while "∧" is the logical conjunction. In SCT, such a set of propositions on which the collective judgments are made is called an "agenda" and it plays a significant role in characterizing the judgment aggregation rules. For example, Endriss (2016) provides a state of the art review on the axiomatic characterization of agendas. Even if Mr. X, Ms. Y and Mr. W accept the common doctrine, that is (a ∧ b) ↔ o, they may disagree on the three propositions (a, b, and o) as represented in Table 3.6.
To illustrate the more general problem of the discursive dilemma, we can simplify this example and consider only the following proposition: if the company can design the system from the software tools available on the market, then it adopts a more flexible solution (a → o). As may be noticed in Table 3.7, each expert holds a consistent set of judgments:

Table 3.6 Doctrinal paradox

                             a   b   (a ∧ b) ↔ o   o
Mr. X                        Y   Y   Y             Y
Ms. Y                        N   Y   Y             N
Mr. W                        Y   N   Y             N
Group using majority rule    Y   Y   Y             N
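The proposition-wise majority vote of Table 3.6 can be reproduced in a few lines of Python (a minimal sketch following the example above; under the doctrine (a ∧ b) ↔ o, each individual's judgment on o follows from his own premises):

```python
# Premise judgments of the three decision-makers (True = Y, False = N).
judges = {
    "Mr. X": {"a": True,  "b": True},
    "Ms. Y": {"a": False, "b": True},
    "Mr. W": {"a": True,  "b": False},
}

def majority(values):
    return sum(values) > len(values) / 2

maj_a = majority([j["a"] for j in judges.values()])
maj_b = majority([j["b"] for j in judges.values()])
maj_o = majority([j["a"] and j["b"] for j in judges.values()])

# a and b each win the majority, yet o loses it: the collective judgment
# violates the doctrine that every individual accepts.
print(maj_a, maj_b, maj_o)   # True True False
```

Although every individual judgment respects the doctrine, the proposition-wise majority does not, which is exactly the inconsistency shown in the last row of Table 3.6.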

Table 3.7 Discursive dilemma and some aggregation rules

                                    a     a → o   o
Mr. X                               Y     Y       Y
Ms. Y                               N     Y       N
Mr. W                               Y     N       N
Group using majority rule           Y     Y       N
Group using premise-based rule      Y     Y       Y
Group using conclusion-based rule   –     –       N
Group using quota rule (¾ quota)    N     N       N
Group using a distance-based rule   tie   tie     tie

• Mr. X believes the company can design the system from the software pieces available on the market (a) and that this is a condition for flexibility (a → o), and consequently will adopt a more flexible solution (o);
• even if Ms. Y believes that using the software pieces available on the market will lead to a more flexible solution, she is not convinced that it is feasible to reach the requirements with what is available on the software market and, therefore, the designed system will not be as flexible as intended;
• Mr. W believes the company can design the system from the available software, but does not consider it to be a requirement for a flexible solution and for that reason no more flexible solution will be adopted in the end.

If we analyze this very simple set of interconnected propositions, even if each individual judgment is logically consistent, applying the majority rule to all propositions yields a collective judgment that is inconsistent: the majority believes the company can design the system from the software pieces available on the market (a is true) and that this leads to a flexible system (a → o is true), but the majority believes in no flexible solution (o is false). In SCT, a number of different aggregation rules are described to deal with such situations. According to Endriss (2016), the most common rules are: the premise-based rule, the conclusion-based rule, quota rules, and distance-based rules. Each of these rules comes with some intrinsic properties that will be synthesized after their brief discussion and exemplification. The premise-based rule presumes first the classification of the propositions (i.e. a, a → o, and o) into premises and conclusions. This classification is basically a subjective matter, but normally the premises are the propositions that are the preconditions for other propositions, i.e. a and a → o in our case.
The rule consists in taking a majority vote only on each premise and in deciding the conclusion by deduction. Consequently, the premises are collectively acknowledged if and only if they are confirmed by the majority. In Table 3.7, the premise-based rule requires a and a → o to be admitted by the group and therefore confirms o. Dietrich and Mongin (2010) mention several common problems with this aggregation rule:

1. it is sensitive to the selection of premises and therefore subject to manipulability (in addition, it is not always possible for the propositions to be clearly classified into premises and conclusions);
2. the premises do not necessarily determine the conclusion (e.g. if the collective judgment results in ¬a and a → o, the conclusion o cannot be determined).

The conclusion-based rule, similarly to the previous rule, presumes the classification of propositions into premises and conclusions. However, under this rule, the conclusion (o) is decided by a majority vote on o, neglecting the collective judgment on the premises. Consequently, the rule does not provide any justification for the collective judgment. As in the case of the premise-based rule, the conclusion-based rule is sensitive to a biased classification of propositions into premises and conclusions.
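The two rules can be contrasted directly on the judgments of Table 3.7 (a minimal sketch; the dictionary keys and helper names are ours):

```python
# Judgments from Table 3.7 (True = Y, False = N).
judges = {
    "Mr. X": {"a": True,  "a->o": True,  "o": True},
    "Ms. Y": {"a": False, "a->o": True,  "o": False},
    "Mr. W": {"a": True,  "a->o": False, "o": False},
}

def majority(prop):
    votes = [j[prop] for j in judges.values()]
    return sum(votes) > len(votes) / 2

def premise_based():
    # Vote only on the premises a and a->o, then derive o by modus ponens.
    if majority("a") and majority("a->o"):
        return True
    return None   # the accepted premises do not determine the conclusion

def conclusion_based():
    # Vote directly on the conclusion, ignoring the premises.
    return majority("o")

print(premise_based(), conclusion_based())   # True False
```

The same profile of individual judgments yields o under the premise-based rule and ¬o under the conclusion-based rule, matching the corresponding rows of Table 3.7.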

The quota rule replaces the majority with well-calibrated acceptance thresholds or quotas. In this framework the group will confirm a proposition if and only if at least a proportion of the group, corresponding to its assigned quota, affirms this proposition. If in our example we have a ¾ quota for each of a, a → o and o, then the group will negate each proposition (Table 3.7). Dietrich and List (2007) have shown that even if the quota rule may guarantee a non-manipulative outcome, it may easily be biased to favour or to reject a certain proposition when a small or high acceptance quota is used. In any case, the quota rule requires a systematic investigation of how the propositions are logically interconnected to find the right thresholds for a consistent collective judgment.

Distance-based rules try to find as a collective judgment a set of propositions that minimizes the distance to the individual judgments. That implies searching all complete and consistent sets of propositions that may result from the combination of individual judgments and finding the combination which minimizes the sum-total distance to the individual judgments. A very simple way to quantify this distance is to count the number of propositions on which two individuals disagree (i.e. the Hamming distance), but many others may be defined to quantify the distance between the individual judgments. For the individual judgments from our example, there are three collective judgments, namely {a, a → o, o}, {¬a, a → o, ¬o}, and {a, ¬(a → o), ¬o}, that solve this constrained minimization problem, each of these sets having a sum-total distance to the individual judgments equal to four. As may be observed, the distance-based aggregation rules usually need to be combined with a tie-breaking rule to find a single collective judgment. They are also sensitive to how the distance between the individual judgments is quantified, but do not need a classification of propositions into premises and conclusions or acceptance thresholds like the other rules.

As may be noticed from the above exemplification of some judgment aggregation rules, none can guarantee a consistent collective judgment as a function of individual judgments. For that reason, the work in SCT employs axiomatic methods to establish possibility or impossibility results in judgment aggregation. It introduces some desirable conditions (axioms) that a judgment aggregation rule should satisfy, such as (List and Pettit 2002): (a) anonymity (all individuals are treated symmetrically), (b) neutrality (all propositions are treated symmetrically), and (c) independence (the collective decision regarding a proposition depends only on the individual judgments regarding that proposition). Neutrality and independence together are often referred to as systematicity. Table 3.8 synthesizes, inspired from Endriss (2016), the axioms that characterize the main aggregation rules presented in this section, where collective rationality presumes the consistency and the completeness of the collective outcome. While the distance-based rule is collectively rational in general, the other rules satisfy the collective rationality axiom as well for some specific agendas (i.e. the way in which the propositions are logically interconnected). For instance, if the agenda contains only propositional variables, without logical operators, then the majority rule will be consistent too. Therefore the work in SCT usually combines the axiomatic characterization of judgment aggregation rules with agenda properties.
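The distance-based rule described above can be checked mechanically: enumerate every complete and consistent judgment set over {a, a → o, o} and score it by the sum of Hamming distances to the individual judgments (a minimal sketch; the function names are ours):

```python
from itertools import product

# Individual judgments on (a, a->o, o) from Table 3.7 (True = Y, False = N).
judges = [
    (True,  True,  True),    # Mr. X
    (False, True,  False),   # Ms. Y
    (True,  False, False),   # Mr. W
]

def consistent(a, imp, o):
    # The judgment on a->o must match the truth value of the implication.
    return imp == ((not a) or o)

def hamming(u, v):
    return sum(x != y for x, y in zip(u, v))

candidates = [s for s in product((True, False), repeat=3) if consistent(*s)]
scores = {s: sum(hamming(s, j) for j in judges) for s in candidates}
best = min(scores.values())
winners = [s for s in candidates if scores[s] == best]
print(best, len(winners))   # 4 3 -- three tied collective judgments
```

The search confirms the text: three consistent judgment sets tie at a sum-total Hamming distance of four, so a tie-breaking rule is indeed required.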

Table 3.8 Judgment aggregation rules and their axiomatization

Aggregation rule        Anonymity   Systematicity   Collective rationality
Majority rule           Y           Y               N
Premise-based rule      Y           N               Y
Conclusion-based rule   Y           N               N
Quota rule              Y           Y               N
Distance-based rule     Y           N               Y

The above examples of judgment aggregation rules show very clearly that even for a minimal reasoning process the group judgment cannot be simply derived from the individual judgments. Therefore, trade-offs between different axiomatic properties are always employed in strong connection with the context of the decision problem. Many issues for characterizing this context, such as different competence levels of decision-makers, incomplete sets of judgments etc., are nowadays explored in SCT to better frame the applicability of this analytic and normative work.

3.3.2 Resource Allocation

Resource allocation is a frequent problem that requires collective decision-making. Many types of common resources (i.e. time, financial, human, computational, material etc.) are allocated in a collaborative way to reconcile divergent views and satisfy welfare or fairness criteria. This problem arises in many real-world settings when a group needs to divide business profits or assets, natural or artificial resources, workforce etc. In SCT, resource allocation is seen as a collective decision-making problem of aggregating the individual preferences for an allocation alternative of resources amongst the members of a group. The resources may be one single divisible resource (homogeneous or heterogeneous) or multiple indivisible resources (single unit or multiple units, sharable or not). Because the solution space for this problem is huge even for a small number of resources (for example, allocating five resources to five individuals means 5^5 = 3125 possible allocations), it is seen as a combinatorial optimization problem of an NP-complete class. Indeed, most welfare optimization criteria (see Table 3.9) for resource allocation are NP-complete, as proved by Chevaleyre et al. (2006). Despite its complexity, some feasible approaches for resource allocation are still possible. SCT promotes two basic approaches to tackle this problem in collective settings:

• centralized, when the complete individual preferences of the group are communicated to a mediator who tries to find a satisfactory solution that maximizes a certain criterion. In this case, the mediator first elicits the individual preferences, and second employs a stand-alone DSS (i.e. a mixed linear programming tool) with a predefined mathematical model to solve the problem;
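The size of the solution space, and the brute-force welfare maximization it implies, can be sketched in Python (the utility numbers below are invented for illustration):

```python
from itertools import product

n_resources, n_agents = 5, 5
print(n_agents ** n_resources)   # 3125 possible allocations

# Brute-force utilitarian welfare for a tiny instance: utility[i][r] is
# agent i's value for resource r (illustrative numbers).
utility = [[3, 1], [2, 4]]       # 2 agents, 2 resources

best = max(
    (sum(utility[owner][r] for r, owner in enumerate(assign)), assign)
    for assign in product(range(2), repeat=2)
)
print(best)   # (7, (0, 1)): agent 0 gets resource 0, agent 1 gets resource 1
```

Even this naive enumeration makes the combinatorial explosion obvious: each additional resource multiplies the search space by the number of agents, which is why the general problem is treated as NP-complete.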

Table 3.9 Criteria for evaluating the resource allocation in group decision making

Efficiency:
• Pareto-efficiency: an allocation A is Pareto-efficient if there is no other feasible allocation A′ such that u_i(A′_i) ≥ u_i(A_i) ∀i ∈ N, with the inequality strict for at least one case. In other words, there is no better allocation for an individual without it being worse for another one.
• Utilitarian social welfare: the utilitarian allocation tries to maximize the overall group utility, i.e. to find max Σ_{i∈N} u_i(A_i).
• Nash social welfare: similar to utilitarian social welfare, the Nash social welfare increases the overall group utility, but tries to minimize the inequality among the allocations for each participant by finding max Π_{i∈N} u_i(A_i).

Fairness:
• Egalitarian social welfare (maxmin): an allocation is egalitarian when the utility of the "poorest" individual is as high as possible. In this case the allocation tries to maximize min_{i∈N} u_i(A_i).
• Envy-free: an allocation is envy-free if no participant prefers somebody else's assigned resources to his own: u_i(A_i) ≥ u_i(A_j) ∀i, j ∈ N.
• Proportionality: an allocation is proportionally fair if each of the n individuals gets from the allocation at least one nth part of the total utility: u_i(A_i) ≥ (Σ_{j∈N} u_i(A_j))/n ∀i ∈ N.
• Equitable: an allocation is equitable if all participants are equally happy with the outcome of the allocation: u_i(A_i) = u_j(A_j) ∀i, j ∈ N.
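For additive valuations, the fairness criteria of Table 3.9 reduce to simple comparisons that can be checked programmatically (a minimal sketch; the resources and point values are invented for illustration):

```python
def u(agent_values, bundle):
    """Additive utility of a bundle of named resources."""
    return sum(agent_values[r] for r in bundle)

# values[i][r]: agent i's value for resource r (illustrative numbers).
values = [{"car": 6, "house": 4}, {"car": 3, "house": 7}]
allocation = [{"car"}, {"house"}]   # A_0 = {car}, A_1 = {house}

n = len(values)
envy_free = all(u(values[i], allocation[i]) >= u(values[i], allocation[j])
                for i in range(n) for j in range(n))
proportional = all(u(values[i], allocation[i]) >=
                   sum(u(values[i], allocation[j]) for j in range(n)) / n
                   for i in range(n))
equitable = len({u(values[i], allocation[i]) for i in range(n)}) == 1

print(envy_free, proportional, equitable)   # True True False
```

The sample allocation is envy-free and proportional but not equitable (the two agents end up with utilities 6 and 7), illustrating that the criteria are logically independent.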

• decentralized, when the individuals solve the problem in an interactive way without any need for a mediator. In this case, the preferences are revealed step by step during the successive interactions among the group members, who follow a predefined interaction protocol, structured and mediated by a GDSS.

In collective decision-making, the solution for resource allocation may be evaluated from two main, often incompatible, perspectives: efficiency and fairness. Which one is the relevant measure to assess the resource allocation depends on the application domain. For example, while in e-commerce fairness seems to be more relevant than efficiency, in a supply chain efficiency is the main concern. In any case, most application domains require a trade-off between efficiency and fairness criteria. In some applications, the fairness measure is used in a complementary way as a stronger optimization constraint. If, for example, the resource allocation problem yields multiple equally efficient solutions (i.e. Pareto optimal), the fairness measure is further used to discriminate the solution space. In Table 3.9 we synthesize, from Procaccia (2016) and Bouveret et al. (2016), several of the most frequently used criteria to quantify the quality of a collective solution for resource allocation. In the formalization of the criteria, we have a finite set of individuals N that need to agree on the division of a number of resources among them. A stands for an allocation alternative of the entire set of resources, while A_i denotes the bundle of resources that are given to individual i (i ∈ N) in the allocation A. Consequently, we have an individual valuation function for the bundle of resources someone is receiving (u_i(A_i), i ∈ N), or for the bundle of resources someone else is receiving (u_i(A_j), j ∈ N) in an allocation A. Note that, if in the case of voting (see Sect.
3.2.2) the individuals express mainly ordinal preferences over a set of alternatives, in resource allocation they commonly express cardinal preferences for a resource as a result of an individual valuation function. A cardinal preference is inherently a subjective value assigned to a resource and therefore, when dealing with fairness criteria, an objective judgment is practically impossible. SCT provides some feasible approaches to resource allocation. These solutions are roughly separated with respect to the type of resources that are allocated (divisible or indivisible) and the approach used to solve the problem (centralized or decentralized). In the context of this book, the most relevant are the distributed approaches that try to optimize the fairness of the allocation. Fairness criteria are a precondition for collective agreement inside the group over the final allocation. In addition, a centralized approach inherently presumes the adoption of a computed solution using a mathematical model which is not easily understandable and accepted by the group. Note that all the distributed approaches are suitable for computerized support from any GDSS technology available today. These protocols typically do not employ central computational power or complex elicitation schemes for the individual preferences. Moreover, they may be easily encoded in the structure of a collaborative decision model (i.e. ThinkLet, see Sect. 3.4). Table 3.10 synthetically describes the characteristics of distributed approaches developed in SCT to allocate resources in group decisions (Procaccia 2016; Bouveret et al. 2016). The procedures are classified according to the resource type in protocols for divisible and indivisible resources. A divisible resource concerns a single divisible good that is going to be divided as fairly as possible among the members of a group.
This problem is known in SCT as the "cake-cutting" problem, a metaphor for studying the fair division of a heterogeneous, one-dimensional resource. On the other hand, indivisible resources concern multiple items that usually cannot be shared among the group members and are completely allocated (no item will remain unallocated and no payment for balancing the allocation is allowed). In the following subsections, we briefly present these protocols to highlight the feasibility of their implementation with a GDSS. More details on this issue will be described in Sect. 3.4.

Table 3.10 Examples of distributed approaches for the fair division of resources

Type of resource   Protocol            Group size   Characterization
A. Divisible       Cut-and-choose      = 2          Envy-free
                   Selfridge-Conway    = 3          Proportional
                   Last-diminisher     ≥ 3          Proportional
B. Indivisible     Adjusted winner     = 2          Envy-free
                   Descending demand   ≥ 3          Egalitarian

3.3.2.1 Interaction Protocols for Divisible Resources

In the Cut-and-choose (N = 2) protocol, one individual divides the resource (i.e. the cake) into two pieces of equal value (in his valuation), and the other chooses one of them. The procedure guarantees proportionality and envy-freeness. If the division may be realized with more than one cut, the protocol does not ensure Pareto efficiency.

Selfridge-Conway (N = 3) is considered the first interaction protocol achieving envy-freeness for a group of 3 individuals. The procedure guarantees proportionality and envy-freeness, but not equitability or Pareto efficiency. The protocol presumes the following steps (Brams and Taylor 1995):

1. Individual 1 "cuts" the resource into three pieces considered by him to be equal;
2. Individual 2 either "passes" if in his view the two most valuable pieces are tied, or "trims" the most valuable piece to be, in his view, of equal value with the second. If he passed, the individuals pick the parts in the order: 3, 2, 1;
3. If individual 2 did "trim", then individuals 3, 2, 1 pick their parts in that order, but individual 2 is required to take the trimmed piece, unless individual 3 took it. The trimmings are kept unallocated for now;
4. Now the trimmings are divided. Whoever of individuals 2 and 3 received the untrimmed piece does the "cutting". The individuals choose in this order: non-cutter, individual 1, cutter.

Last-diminisher (N ≥ 3), also known as the Banach-Knaster procedure, was proposed by Steinhaus (1948) as a generalization to larger group sizes. This interaction protocol guarantees proportionality but not equitability, envy-freeness or Pareto efficiency. The protocol presumes the following steps:

1. Individual 1 "cuts" a piece from the resource that he considers to represent 1/n;
2. That piece is passed around the group. Each individual either lets it pass if he considers it too small, or "trims" it to what he considers to be 1/n;
3. After the piece has made the full round, the last individual to "cut" something off (the "last-diminisher") is obliged to take it;
4. The remaining individuals repeat the procedure for the rest of the resource (including the trimmings) until only 2 individuals remain, when the "cut-and-choose" procedure is used.
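Cut-and-choose can be simulated numerically by modelling the cake as the interval [0, 1] and each valuation as the integral of a personal density (a sketch; the two densities below are invented for illustration):

```python
# Cut-and-choose over [0, 1] with heterogeneous valuations.

def value(density, a, b, steps=2000):
    """Midpoint-rule integral of a valuation density over [a, b]."""
    width = (b - a) / steps
    return sum(density(a + (k + 0.5) * width) for k in range(steps)) * width

cutter  = lambda t: 2 * t       # values the right side more
chooser = lambda t: 2 - 2 * t   # values the left side more

# The cutter bisects the cake into two halves of equal value *to him*.
lo, hi = 0.0, 1.0
for _ in range(50):
    mid = (lo + hi) / 2
    if value(cutter, 0, mid) < value(cutter, mid, 1):
        lo = mid
    else:
        hi = mid
cut = (lo + hi) / 2             # ~0.707 for this cutter density

# The chooser takes the piece he values more; the outcome is envy-free.
left, right = value(chooser, 0, cut), value(chooser, cut, 1)
print(round(cut, 3), "chooser takes", "left" if left >= right else "right")
```

Because the cutter is indifferent between the two pieces by construction and the chooser picks his preferred one, neither agent envies the other, which is the envy-freeness property listed in Table 3.10.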

3.3.2.2 Interaction Protocols for Indivisible Resources

Adjusted Winner (N = 2) is described by Brams and Taylor (1996) and applies to two individuals or two groups with different interests. The valuation function is additive, and the protocol may require splitting one of the resources at the end of the allocation. The protocol is equitable, envy-free and Pareto efficient. It consists in the following sequence of steps:

1. Each individual distributes a given number of points (i.e. 100) over the resources to reflect his valuation for each of them;
2. Each resource is initially given to the individual that assigns it the greatest number of points. If we have a tie for a resource, that resource will not be assigned to any of them;
3. Each individual sums up the total number of points he has received, and the one that has received the fewest points is now given the items that had a tie;
4. If the total number of points each individual has received is equal, we stop. Otherwise, resources are transferred from the richest to the poorest until the move of a resource would cause the richest to have fewer total points than the poorest. This last transferred resource needs to be split. Resources are transferred in the ascending order of their associated score;
5. The score is computed by dividing the points given by the richest to a resource by the points given by the poorest to the same resource.

Descending Demand (N ≥ 3) was proposed by Herreiner and Puppe (2002) and, contrary to the protocols presented before, presumes a linear ordering over subsets of resources. Therefore the protocol is not well suited for a large number of resources. The protocol is egalitarian and Pareto efficient. The steps of the protocol are:

1. The protocol starts by fixing the order in which the individuals will express their preferred bundles of resources;
2. In this predefined order the individuals first give their most preferred subset of resources. If after this round there is a feasible allocation combining only the subsets of resources mentioned up to now, the protocol stops. Otherwise, following the same predefined order, each individual gives his second most preferred subset of resources, and the group tries again to find a feasible allocation;
3. The protocol continues in this way until a feasible allocation is found. When there are multiple feasible solutions, the Pareto efficient one is chosen.

As may be noticed from the description of the protocols for resource allocation, there is no procedure that is guaranteed to satisfy all fairness and efficiency criteria if the group size is greater than 3. Therefore, the right protocol should always be selected with respect to the problem and group characteristics (see Sects. 2.1.1 and 2.1.3). Despite the large number of fair division algorithms that are reported in the SCT literature, few of them have been implemented and used in practice. One of the first implementations was realized by Budish (2011) to allocate MBA courses at the University of Pennsylvania. Since then, Goldman and Procaccia (2014) have built a website, called Spliddit, with some adaptations and implementations of these algorithms for solving practical problems. Currently Spliddit contains cloud-based implementations for resource allocation (i.e. division of rent, credit, fare, and tasks) and employs a mixed integer linear program to compute the solution. Nevertheless, the increasing adoption of collaborative technologies will further extend the use of GDSS technology for applications where resource allocation is the main concern.
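The Adjusted Winner steps can be condensed into a short script (a sketch only: the point values are invented, ties and zero-point bids are ignored, and we track point totals rather than the physical bundles):

```python
# Adjusted Winner for two parties; points[item] = (party 0's bid, party 1's bid).
points = {"plant": (60, 30), "brand": (25, 30), "patent": (15, 40)}

totals, bundles = [0.0, 0.0], [[], []]
for item, pts in points.items():
    w = 0 if pts[0] > pts[1] else 1          # higher bidder takes the item
    bundles[w].append(item)
    totals[w] += pts[w]

rich = 0 if totals[0] > totals[1] else 1
poor = 1 - rich

# Transfer items in ascending ratio (richer's points / poorer's points);
# split the item whose full transfer would overshoot equality.
for item in sorted(bundles[rich],
                   key=lambda i: points[i][rich] / points[i][poor]):
    pr, pp = points[item][rich], points[item][poor]
    if totals[rich] - pr >= totals[poor] + pp:   # full transfer keeps order
        totals[rich] -= pr
        totals[poor] += pp
    else:                                        # split this item
        f = (totals[poor] + pp - (totals[rich] - pr)) / (pr + pp)
        totals[rich] += -pr + f * pr             # richer keeps fraction f
        totals[poor] += (1 - f) * pp             # poorer gets the rest
        break

print(round(totals[0], 2), round(totals[1], 2))  # both 64.55
```

Splitting the crossing item equalizes the two point totals exactly, which is the equitability property the protocol guarantees.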

3.3.3 Group Argumentation

In CDM, the consensus among participants with respect to a collective outcome is essential for the decision implementation phase. To reach consensus, argumentation is the rational way to support opinions, to justify possible choices and/or to convince the other group members that one alternative is better or worse than another. Moreover, group argumentation is the common way to shape the features of ill-structured problems. For that reason, there is no GDSS without dedicated tools to support group argumentation. These tools provide an external representation for the arguments in so-called argumentation maps. By using specialized tools, such maps can be easily created, modified, and reviewed by the group in a collaborative working environment. Besides offering a structured way of communication among the group members, an argumentation map may additionally enforce some basic principles of rational argumentation (i.e. to consider all relevant arguments, positive or negative, and so on). For that reason, the first attempts to formalize the notion of dialectics are rooted in formal non-monotonic logics as a way to formalize the circumstances under which the decision-makers will change their view when new information from other group members becomes available. Argumentation theory, among other reasoning formalisms, is a common source of inspiration for many CDM tools that automate and support a rational argumentation framework. Indeed, there are many CDM systems that use argumentation to support different application domains, ranging from medical or urban planning to e-Government solutions. For example, Ouerdane et al. (2010) provide a good overview of how argumentation theory has been successfully implemented in many GDSSs. In argumentation theory, a conclusion is accepted or not based on the relationship between the supporting and opposing arguments. Thus, given a set of conflicting arguments for a specific alternative, how do we reach a collectively rational outcome?
From the available set of conflicting arguments, how do we decide which arguments should be accepted, rejected or classified as undecidable? In SCT, such questions are broadly analyzed inside an abstract argumentation framework that abstracts from the specific structure of individual arguments. An early example of an abstract argumentation framework was proposed in (Dung 1995), where the argumentation is modeled as a directed graph, called a defeat graph, with nodes representing the arguments and edges the "attack" relations between the nodes. In the graph, the arguments may be analyzed for their logical consistency by using some predefined criteria that identify the sets of arguments that collectively represent a logical position. In argumentation theory, there are two basic approaches to defining the adequacy of arguments: extension-based semantics, proposed by Dung (1995), and labelling-based semantics, proposed by Caminada (2006). Extension-based semantics defines representative patterns of inference in the abstract argumentation framework that can be easily verified for their logical coherence by using inference engines for non-monotonic logic programming, as shown by Caminada et al. (2015). For example, there are many available solvers for Answer Set Programming (Baral 2003), a common approach in the AI community for non-monotonic reasoning. The semantics may be any of the well-known approaches investigated in the knowledge engineering research field, specifically in defeasible reasoning. For example, Baroni, Caminada and Giacomin (2011) provide a good overview of the most influential semantics used in abstract argumentation (e.g. grounded semantics, preferred semantics, stable and semi-stable semantics, stage semantics, ideal semantics and cf2 semantics).
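The simplest of these, the grounded semantics, can be computed on a small defeat graph by a fixpoint iteration (a sketch; the arguments and attack relation below are invented for illustration): an argument is labelled "in" when all its attackers are "out", "out" when some attacker is "in", and "undec" when neither rule ever applies.

```python
# Grounded labelling of a Dung-style defeat graph.
args = {"a", "b", "c", "d"}
attacks = {("b", "a"), ("c", "b"), ("d", "d")}  # c attacks b, b attacks a,
                                                # d attacks itself

label = {}
changed = True
while changed:
    changed = False
    for x in args - label.keys():
        attackers = {s for (s, t) in attacks if t == x}
        if all(label.get(s) == "out" for s in attackers):
            label[x] = "in"     # every attacker is defeated
            changed = True
        elif any(label.get(s) == "in" for s in attackers):
            label[x] = "out"    # defeated by an accepted argument
            changed = True
label.update({x: "undec" for x in args - label.keys()})
print(sorted(label.items()))
# [('a', 'in'), ('b', 'out'), ('c', 'in'), ('d', 'undec')]
```

The unattacked argument c is accepted, which defeats b and thereby reinstates a, while the self-attacking d stays undecided; this is the kind of coherence check the cited solvers automate on much larger graphs.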
These semantics are implemented in different software libraries (ArgKit 2016; Toast 2016) that can be integrated in classical collaborative argumentation tools, such as Compendium Institute (2016), Araucaria (2016) and DebateGraph (2016). Meanwhile, starting from the initial development of extension-based semantics, many other possible extensions have been proposed to capture the specific decision-making context for abstract argumentation. The most notable ones are the introduction of preferences over arguments, their weights or probabilities. Their role is to give a formal structure in which the arguments may be filtered and checked with respect to their rational coherence. While extension-based semantics are integrated in a GDSS as an expert system software tool (see Sect. 2.3.4.2), labelling-based semantics requires the participants to evaluate the arguments and to assign to each of them a predefined characterization label, such as: accept, reject or undecided. Next, the individual evaluations are aggregated into a collective labelling that satisfies the criteria of logical coherence. If we consider arguments as propositions logically connected by the conditions of legal labelling, then the problem of labelling aggregation becomes similar to the problem of judgment aggregation discussed in Sect. 3.3.1. Nevertheless, if in judgment aggregation we have two possible values for a proposition, in labelling aggregation we have at least three possible values for an argument. The simplest way to aggregate the individual labellings would be to use plurality voting by selecting the label which appears most frequently in the individual labellings. As shown by Rahwan and Tohme (2010), despite the fact that plurality voting satisfies many of the properties discussed in SCT, such as anonymity, unanimity, independence and so on, it does not satisfy the property of collective rationality.
Moreover, they proved that there is no aggregation operator that simultaneously satisfies collective rationality together with unanimity, anonymity and systematicity. A complementary approach to aggregating the individual labellings was proposed by Caminada and Pigozzi (2011). They showed that collective rationality may be achieved if it does not go against the individual labellings of arguments, and proposed three group decision processes (i.e. sceptical, credulous, super-credulous) to aggregate the collective labelling. The process comprises different phases in which the individuals are asked to approve or not the labels assigned to some incoherent arguments, or to change the initial labelling in the light of new arguments. CDM tools for group argumentation usually combine both semantics (i.e. extension-based and labelling-based) in some predefined patterns of reasoning to validate the logical structure of a particular argument. In this way the coherence of the resulting collective argumentation map is enforced during group discussion. An early example of a GDSS that employs the formalisms of group argumentation is HERMES, a system developed by Karacapilidis and Papadias (2001). It supports the argumentative discourses between group members by providing mechanisms for labelling-based semantics mapped onto the gIBIS model (Conklin and Begeman 1988). Nowadays, there are many collaborative argumentation tools and software libraries, such as Compendium Institute (2016), Araucaria (Reed and Rowe 2004) or DebateGraph (2016), that employ such formalisms for group argumentation. For now, they are mostly stand-alone tools, not yet integrated into classical GDSS software packages. Mapping the outputs of such tools to the inputs required by other tools used in CDM is still a challenge.
Although the formal analysis of group argumentation has been studied for about two decades, its theoretical insights have only recently been introduced into dedicated tools for group argumentation, owing to the computational complexity of the inferences involved. Nevertheless, many complementary challenges are still being studied in SCT, particularly the representation of incomplete information in group argumentation frameworks and the complexity of manipulative actions, and these will influence the development of argumentation tools.
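The plurality aggregation of labellings discussed above can be sketched in a few lines. This is a minimal illustration, not an implementation from any cited tool; as Rahwan and Tohme (2010) show, the result need not be collectively rational, i.e. it may violate the coherence conditions of a legal labelling for the underlying defeat graph.

```python
from collections import Counter

LABELS = ("accept", "reject", "undecided")

def plurality_labelling(individual_labellings):
    """For each argument, pick the label assigned most often across the
    participants' labellings (ties broken by the order in LABELS)."""
    arguments = individual_labellings[0].keys()
    collective = {}
    for a in arguments:
        votes = Counter(labelling[a] for labelling in individual_labellings)
        collective[a] = max(LABELS, key=lambda lab: votes[lab])
    return collective

# Three participants label two conflicting arguments:
ballots = [
    {"x": "accept", "y": "reject"},
    {"x": "accept", "y": "undecided"},
    {"x": "reject", "y": "reject"},
]
print(plurality_labelling(ballots))  # {'x': 'accept', 'y': 'reject'}
```

If x attacks y in the defeat graph, accepting x and rejecting y happens to be coherent here; with other ballots the plurality outcome could label both "accept", which no legal labelling admits.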

3.4 Collaboration Engineering

The possible employment of many different tools to support any phase of collaborative decision making (CDM) has led to group/multi-participant decision support systems (GDSS) with an open and dynamic composition of software collaboration tools, each implementing one of the aggregation mechanisms discussed in the previous sections. Each of these software collaboration tools (brainstorming, voting and ranking, multi-criteria analysis, audio-video conferencing, shared document editing, micro-blogging, and so on) displays a high level of configurability, with tens of possible independent settings, which require a high level of expertise to exploit the available functionalities effectively. Indeed, several field studies in GDSS research have shown a tight correlation between an adequate tool-chain configuration and the effectiveness of a GDSS used to support CDM. Therefore, in the last 15 years, a significant body of research has focused on designing reusable CDM practices that can be implemented in a systematic way by non-expert users. Owing to its similarities with the well-known field of software engineering, this body of research was termed collaboration engineering by de Vreede et al. (2006).

The first issue in this endeavour was to identify the basic decomposition units of a CDM process, their structural properties, and their classification (see Sect. 3.4.1). Having the building blocks of a CDM process, the second concern is to define its workflow structure (see Sect. 3.4.2). All these elements may be viewed as a pattern language for designing CDM processes in which some CDM activities are partially supported by a GDSS. With a relevant set of CDM models, the next step is to deploy, reuse, and maintain the coherence of these models with respect to the application context. The main approaches to eliciting, storing, and reusing the experiential knowledge reflected in CDM models will be discussed in Sect. 3.4.3.

3.4.1 Basic Collaboration Patterns

Despite its inherent specificity, any CDM model can be split into different classes or types of collaboration patterns. These collaboration patterns have some common characteristics in relation to how a group transforms the pool of common knowledge. Therefore, they are seen as the primary unit for analyzing and decomposing the structure of a CDM process. Initially, Nunamaker et al. (1997) identified four procedures to be supported by a Group Support System (GSS): idea generation, idea organization, idea evaluation, and idea exploitation. More recently the set of procedures to be supported by software was extended, and de Vreede et al. (2006) identified six collaboration patterns that are sufficient to describe any CDM process. Table 3.11, adapted from Kolfschoten et al. (2015), summarizes the description of these collaboration patterns and some output criteria to measure their performance. Any of these patterns may be further specialized into subclasses or subpatterns to enhance the modularity and reuse of collaboration engineering knowledge. Note that all collaboration patterns are technology-independent and, therefore, capture only the general aspects of the collaboration. Even if the main work on this topic was carried out for a particular GDSS (i.e. GroupSystems) (ThinkTank 2016), the patterns are equally applicable to any collaborative working environment. For example, while OpenSpace-Online (2016) uses session-topic templates for collaboration patterns, Heiko (2012) uses a general collaborative activity model to integrate CDM into the classical business process modelling notation. Like software design patterns, the collaboration patterns of Table 3.11 provide a set of templates to decompose the CDM process, and, therefore, the different phases of Herbert Simon's process model (see Sect. 2.1.1) may include one or more of them. In this way the possibility of reusing many collaboration patterns in a CDM process is significantly increased.
The extent of reusing these patterns depends on many contextual variables (e.g. group composition, GDSS, problem types, etc.), including their degree of specialization (see Sect. 5.1). The specific implementation of any collaboration pattern requires additional data that depends on the specific software collaboration tool used to implement it. Briggs et al. (2003) introduced the thinkLet concept as an instantiation of a collaboration pattern. It includes specific technology-dependent attributes, such as: the software

Table 3.11 Basic collaboration patterns (adapted from Kolfschoten et al. 2015)

1. GENERATE: the process of increasing the number of concepts and ideas (alternatives and objectives in a decision-making setting) used by a group to solve a CDM problem
• Creativity: increase the number of new concepts. Key performance indicators: number of unique contributions; number of irrelevant concepts
• Gathering: increase the amount of more complete and relevant information. Key performance indicators: completeness; relevance/usefulness
• Reflecting: increase the amount of valid info shared and understood by the group. Key performance indicators: validity; completeness

2. REDUCE: the process of selecting from the pool of common knowledge the essential concepts that will be further used by a group to solve a CDM problem
• Filtering: select the concepts that meet specific criteria. Key performance indicator: amount of information selected versus initial amount of information
• Summarizing: select the concepts that reflect a shared understanding by the group. Key performance indicators: representativeness; parsimoniousness
• Abstracting: select the concepts that minimize the need to focus on details. Key performance indicators: representativeness; completeness

3. CLARIFY: the process of increasing the level of shared understanding for the concepts used by a group to solve a CDM problem
• Sense making: increase the level of shared meaning of context and actions. Key performance indicators: retrospectiveness; plausibility
• Building shared understanding: increase the level of shared understanding of terminology. Key performance indicator: conceptual learning

4. ORGANIZE: the process of deepening the understanding of relationships among the concepts used by a group to solve a CDM problem
• Categorizing: increase the no. of categorical relationships among concepts. Key performance indicators: no. of duplicates/correct categories; correct reports, etc.
• Outlining: deepen the understanding of the logical connections among concepts. Key performance indicators: perceived quality; time spent outlining; length of document
• Sequencing: deepen the understanding of the sequential interactions among concepts. Key performance indicators: flexibility; scheduling performance, etc.
• Causal decomposition: deepen the understanding of the causal relationships among concepts. Key performance indicators: model size; dynamic complexity
• Modeling: decrease the overall complexity of the concepts and their relationships. Key performance indicators: clarity, consistency, etc.

5. EVALUATE: the process of increasing the level of understanding for the relative value of the concepts used by a group to solve a CDM problem
• Choice: increase the level of understanding of the concepts most preferred by the group. Key performance indicators: consensus, acceptance; satisfaction, correctness, expertise
(continued)

Table 3.11 (continued)
• Communication of preferences: increase the level of understanding of the individual preferences inside the group. Key performance indicators: mutual understanding; completeness of perspectives; insight into conflicting perspectives
• Qualitative evaluation: increase the level of understanding inside the group of the individual preferences. Key performance indicators: validity; completeness; knowledgeability

6. BUILD CONSENSUS: the process of increasing the number of group members who are committed to implementing a solution to a CDM problem
• Choice: increase the understanding of the most preferred concepts of the group. Key performance indicators: consensus; satisfaction; acceptance method
• Building agreement: increase the understanding of the different preferences among participants. Key performance indicators: standard deviation; perception of agreement
• Building commitment: increase the level of commitment with respect to collective proposals. Key performance indicators: the percentage of participants that commit; satisfaction

collaboration tool, the tool configuration, and a detailed script on how the functionalities of the tool are going to be used. Consequently, a thinkLet details the interactions among the group members during the CDM process, interactions that are mediated by one or possibly more software collaboration tools. In particular, it defines the basic structure of any interaction protocol: (a) the actions that have to be executed by each decision-maker; (b) the rules restricting the decision-makers to a set of permissible actions; and (c) the actions that will be executed by the software collaboration tool as a consequence of the decision-makers' actions. Table 3.12 exemplifies, after Briggs et al. (2003), the "StrawPoll" thinkLet, which instantiates the EVALUATE collaboration pattern by using Borda's

aggregation rule discussed in Sect. 3.2.2, or more specifically the Communication of preferences sub-pattern (see Table 3.11).

Table 3.12 Example of a thinkLet
Name: StrawPoll
Description: Used to quickly find out the individual preferences in a group, together with the level of consensus among participants
Tool: Voting tool
Configuration: Initialize the voting tool with the set of items to be voted on, select an aggregation method for the individual preferences and establish the voting criteria
Script: "We are going to take an ad hoc vote, not a final decision. The intention is to find out whether there are disagreements among us, to better focus our next effort on what is important. I have sent you a ballot of n items. Please rank all items according to your preference. An item ranked at the top of the list is the most preferred one, whilst the last-ranked item is the least preferred one. When the ranking is completed, please click the "submit ballot" button. You have 5 min to complete the ranking. After 5 min we will review our collective preferences and the disagreements among us."

So far, over 100 thinkLets have been documented in the literature. Note that in the context of decision support systems (DSS), the thinkLet concept was anticipated by the "declarative model of the experienced decision maker", which was proposed by Filip (1990) to help the user of a DSS build the model, select the appropriate solver, and evaluate the preliminary results provided by various optimization algorithms.
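The aggregation step behind a StrawPoll-style thinkLet can be illustrated with Borda's rule. The sketch below is ours: the function name and the shortened alternative labels (standing in for the three alternatives of Sect. 1.5.3.1) are illustrative assumptions.

```python
def borda_aggregate(ballots):
    """Aggregate ranked ballots with Borda's rule: among m items, the
    item ranked first on a ballot scores m-1 points, the last scores 0;
    items are returned ordered by decreasing total score."""
    m = len(ballots[0])
    scores = {item: 0 for item in ballots[0]}
    for ballot in ballots:
        for position, item in enumerate(ballot):
            scores[item] += (m - 1) - position
    return sorted(scores, key=lambda item: -scores[item])

# Three participants rank the three alternatives (labels shortened):
ballots = [
    ["off-the-shelf", "generator", "cloud"],
    ["generator", "off-the-shelf", "cloud"],
    ["cloud", "generator", "off-the-shelf"],
]
print(borda_aggregate(ballots))  # ['generator', 'off-the-shelf', 'cloud']
```

Comparing each participant's ranking with the aggregated one also gives the facilitator a quick view of the level of consensus, which is exactly what the StrawPoll script asks the group to review.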

3.4.2 Collaborative Decision-Making Process

In the first chapter of this book we outlined the main organizational trends in extended/virtual/networked enterprises that are constantly increasing the need for CDM. Furthermore, there is an increasing need to dynamically establish collaborative business processes that include many intertwined CDM activities. In collaboration engineering, the CDM process is a chain of thinkLets executed in a sequential order that reflects the meeting agenda. A straightforward way to represent a CDM process would be to use the Business Process Model and Notation (BPMN 2011). Many other process specification languages may equally be used to highlight specific aspects; examples include hierarchical task networks, shared plans, workflows or collaborative processes. If we recall the example given in Sect. 1.5.3.1, the CDM process for the problem of designing and implementing a new information system may, for instance, follow two possible approaches. The first would be a rational assessment, as exemplified in Sects. 3.2 and 3.3, of the three available alternatives (see Sect. 1.5.3.1): to design the system from the software pieces available on the market, to adapt a system generator available for the application domain, or to use the services provided by a cloud computing company. The second would be a kind of "social assessment" using a negotiation process. Both approaches imply complex CDM processes that need to be decomposed into a chain of thinkLets. For example, the rational assessment CDM process may be decomposed (Fig. 3.1) into:
1. criteria generation, to generate the evaluation criteria for the three available alternatives (N.B. in the example from Sect. 1.5.3.1 we used four predefined criteria, but in general they are the result of a group deliberation);
2. criteria selection, to select the most relevant criteria if the list resulting from the previous step is excessively long;
3.
weight assessments, to establish the weights for the selected criteria;
4. alternative evaluation, to evaluate the alternatives against the criteria established before.
Similarly, the social assessment CDM process may be composed of just one thinkLet for the selection of the best alternative, following a collaboration pattern for

consensus building (see Table 3.11). The description of each thinkLet used in Fig. 3.1 may be found in Briggs et al. (2003) and Hengst et al. (2004).

Fig. 3.1 CDM process for implementing a new information system, represented in BPMN (2011)

From this very simple example we can make the following remarks:
• a CDM process may contain many decision nodes (the XOR gateways in Fig. 3.1) to capture the uncertainty associated with the execution of a thinkLet;
• the execution of some thinkLets in the CDM process is optional; e.g. the criteria generation and criteria selection thinkLets of Fig. 3.1 are not required if these criteria are already known;
• there are many possible ways to structure a CDM process, e.g. rational assessment or social assessment for the selection of the best alternative in Fig. 3.1;
• any collaboration pattern from Table 3.11 may be implemented with several thinkLets; e.g. in Fig. 3.1 the weight assessment for the criteria used to evaluate the alternatives needs an "evaluate" collaboration pattern that can be implemented by using one of the StrawPoll and RankOrder thinkLets.
To address these multidimensional aspects, Briggs et al. (2014) proposed a six-layer model to engineer the CDM process. Table 3.13 summarizes and exemplifies the concerns of each layer of this model.

Table 3.13 The six-layer model of the CDM process (adapted from Briggs et al. 2014)

Group goals: Concerns group goals in terms of a desired state in the physical or knowledge space. In our previous example the goal is in the knowledge space, i.e. the best alternative for implementing a new information system

Group deliverables: Concerns the intermediate transformations of the current state, physical or conceptual, to reach the group goal. These transformations may be recorded as tangible objects or intangible states in order to be reused during the same or other CDM processes. In our example, the deliverables are: the criteria to evaluate the alternatives, their weights, etc.

Group activities: Concerns the set of abstract sub-processes by which a group will create its deliverables. They are related to what the group will do, but not how it will do it, which is the main concern of the next layer. The "group activities" layer serves to reduce the cognitive complexity by allowing the decomposition of the "group goal" into achievable sub-goals. In the example from Fig. 3.1, two possible decompositions of the main goal into sub-processes are represented with dashed rectangles: rational assessment and social assessment. This decomposition may be recursive (e.g. the rational assessment has been further decomposed into weight assessments and the other thinkLets) and is highly dependent on how the CDM is engineered

Group procedures: Concerns the collaboration patterns that are used by the group to complete an activity. For example, in Fig. 3.1 we used five of the collaboration patterns described in Table 3.11. Note that up to this level all the concerns are technology-independent, while the next layers strictly rely on the collaborative software tools that are used to support the CDM process

Collaboration tools: Concerns mainly the software functionalities the group is using to instantiate its group procedures.
That is, the collaborative software tool and its configuration, as specified in Table 3.12. In general, this layer may also concern other artefacts that are used by the group to complete its "group procedures". A detailed description of some collaboration tools that may be used in the CDM process will be given in Chap. 4

Collaboration behaviors: Concerns the structured interactions of the group members with the collaboration tools during their teamwork. More explicitly, it relates to the script component of a thinkLet (see, for example, the script for the StrawPoll thinkLet in Table 3.12)
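The thinkLet chain of Fig. 3.1 can be sketched in code: each step stands in for a thinkLet, and the XOR gateways of the workflow become guard conditions that skip or repeat steps. Function names, placeholder criteria and thresholds below are illustrative assumptions, not part of the book's method.

```python
def generate_criteria():
    # Placeholder for a GENERATE-pattern thinkLet (e.g. brainstorming);
    # the criteria listed here are purely illustrative.
    return ["cost", "time", "quality", "risk", "maintainability",
            "vendor lock-in", "scalability", "security"]

def run_cdm_process(known_criteria=None, max_criteria=7):
    """Sketch of the rational-assessment chain of Fig. 3.1, returning
    the list of thinkLet steps actually executed."""
    log = []
    criteria = known_criteria
    if criteria is None:                      # XOR: criteria already known?
        criteria = generate_criteria()
        log.append("criteria generation")
        if len(criteria) > max_criteria:      # XOR: list excessively long?
            criteria = criteria[:max_criteria]
            log.append("criteria selection")
    weights = {c: 1 / len(criteria) for c in criteria}  # placeholder uniform weights
    log.append("weight assessments")
    log.append("alternative evaluation")
    return log

print(run_cdm_process())
# ['criteria generation', 'criteria selection', 'weight assessments', 'alternative evaluation']
print(run_cdm_process(known_criteria=["cost", "time", "quality"]))
# ['weight assessments', 'alternative evaluation']
```

The two calls mirror the remark made earlier in this section: the first two thinkLets execute only when the criteria are not already known, exactly as the XOR gateways of Fig. 3.1 prescribe.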

3.4.3 Deployment of Collaboration Models

The CDM process has traditionally been designed and deployed by an expert facilitator, often called the mediator or leader of the CDM-assisted session. Designing a CDM process means creating, documenting, and validating the logical sequence of activities to attain the group goal and the conditions under which these activities will be executed. Briefly, it includes a detailed description of each layer of a CDM process, as detailed in Table 3.13. On the other hand, deploying a CDM process model means implementing and executing it in an organization.

Therefore, the functions of the facilitator are complex and require special training. They are meant to:
• offer technical assistance to the participants so they can become quickly acquainted with the GDSS and operate it efficiently;
• design the CDM process model (often called the session agenda);
• control the activities included in the model during their implementation and ensure continuity among different CDM sessions at the organizational level.
The International Association of Facilitators (IAF 2016) keeps a database with many facilitation techniques and methods that are documented and shared by expert facilitators to cope with all the complex functions mentioned before. The lack of well-trained facilitators in many organizations is recognized as a major obstacle limiting the adoption of GDSS technology (Briggs et al. 2003). Moreover, owing to the exponential rise in the configuration complexity of the growing number of collaboration tools available on the market (see Sect. 3.4.1), even well-trained facilitators are constrained by the cognitive complexity associated with the design and execution of a CDM process and often come up with suboptimal solutions. In Sect. 5.3 we will detail a cloud-based GDSS implementation platform that considerably expands the design space for modelling the CDM process. For these reasons, the collaboration engineering approach has split the facilitator's role into two distinct ones: (a) the designer of a CDM process, and (b) the executor or practitioner of a CDM process model. From this perspective, the key research challenge has been to automate the role of the expert facilitator so as to help practitioners design and deploy effective CDM processes for themselves, with minimal support from professionals. As noted by Briggs (2015), the automation of facilitators' roles has been the main research topic in collaboration engineering over the last two decades.
There are two basic approaches to dealing with this issue: centralized and decentralized. Note that here we are concerned with the process facilitation role and not with the technical facilitation of assisting the users in operating the GDSS. Beyond CDM process design and deployment, the complete role of an expert facilitator is complex and not yet fully understood and explained. The first approach follows the traditional "expert systems" design methodology (see Sect. 2.3.4.2): the expert facilitators' knowledge is elicited and codified into a knowledge base of CDM process designs that is later used to train and guide the practitioners in building a CDM process model. This knowledge (i.e. a model-based repository) may evolve to deal with slowly changing environments and often relies on personal or organizational rules of thumb to deploy a CDM process. In the past, these methods have been considered "good enough", given that in a slowly changing environment efficient rules may be discovered and incrementally improved based on actual learning and experience. A well-documented example of this approach is ActionCenters (Briggs et al. 2013), which embeds knowledge about CDM process models that practitioners can use to run well-designed processes without any training. Arcade (Mametjanov et al. 2011) and

EasyMeeting (Camilleri and Zaraté 2014) are other prototype systems following the same approach. Basically, these applications are designed around a data-centric view of the world in which the enterprise is seen as a hierarchical consolidation system (see Sect. 1.2.1). This approach works well for single-instance historical data and provides single users with standardized template models for particular business types, such as project management (Harder et al. 2005), crisis response management (Appelman and Driel 2005), scenario design (Enserink 2003), business process improvement (Amiyo et al. 2012) and so on. The second approach is to develop a collective memory storing the collective knowledge and experience invoked in prior implementations of CDM process models, possibly by employing indirect coordination mechanisms as in crowdsourcing (see Sect. 3.1.4). This diachronic perspective on the capability of a GDSS to amplify the synergy of collective facilitation knowledge may be found in the early research on Social Decision Support Systems (Turoff et al. 2002) or Societal-Scale Decision Support Systems (Rodriguez and Steinbock 2004), which are meant to cope with heterogeneous user populations and various types of problems. In this case there is a fine-grained intertwining between the modelling and the deployment of CDM processes. Theoretically, this approach takes the form of the dynamic facilitation method proposed by Jim Rough as a substitute for the traditional facilitation style (Zubizarreta 2006). Unlike conventional facilitation methods, the dynamic method does not predefine the entire structure of the CDM process model, but tries to support the creative process of elaborating alternatives for its design during its deployment. Examples include the employment of case-based reasoning techniques (Adla et al. 2010) or stigmergic coordination mechanisms (Zamfirescu et al. 2014).
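A minimal sketch of the case-based flavour of this second approach: retrieve, from a collective memory of past sessions, the stored CDM process model whose recorded context best matches the current one. The case base, context features and similarity measure below are illustrative assumptions, not the actual mechanism of Adla et al. (2010).

```python
def retrieve_process_model(case_base, context):
    """Return the CDM process model (a thinkLet chain) of the stored
    case most similar to the current context. Similarity here is just
    the number of matching context features (an assumption)."""
    def similarity(case):
        return sum(case["context"].get(k) == v for k, v in context.items())
    return max(case_base, key=similarity)["model"]

# A toy collective memory of two past sessions:
case_base = [
    {"context": {"problem": "selection", "group": "small"},
     "model": ["StrawPoll"]},
    {"context": {"problem": "design", "group": "large"},
     "model": ["FreeBrainstorm", "FastFocus", "StrawPoll"]},
]
print(retrieve_process_model(case_base,
                             {"problem": "design", "group": "large"}))
# ['FreeBrainstorm', 'FastFocus', 'StrawPoll']
```

In a full case-based reasoning cycle the retrieved model would then be adapted, deployed, and finally stored back with its observed performance, which is how the collective memory grows across sessions.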
To design and deploy a CDM process, a practitioner using an automated facilitation tool is expected to iteratively execute the following actions (see Simon's model discussed in Sect. 2.1.1):
• define the problem (e.g. the CDM process model for the implementation of a new information system, as exemplified in the previous section);
• provide new design alternatives and/or evaluate the design alternatives for a CDM process provided by the system (e.g. the design alternatives represented with XOR nodes in Fig. 3.1);
• select the best CDM process model to be deployed (e.g. the workflow of thinkLets, without the XOR nodes, from Fig. 3.1 that is going to be executed);
• deploy the selected model, and finally assess the performance and possibly refine the executed model.
At present, the main distinction between the two approaches to designing an automatic facilitation tool lies in how the problem is defined by the practitioner and in the granularity of iterating these actions. For example, in the centralized approach, the automatic facilitation tool is expected to provide a complete structure of the CDM process model and consequently a complete agenda for its deployment. It includes all the necessary thinkLets, together with their precedence constraints. This form of designing and deploying a CDM process model is well suited when the outcome of thinkLet execution is guaranteed and, simultaneously, the decision objectives remain stable. It basically corresponds to the hierarchical control scheme discussed in Sect. 1.2.1. On the other hand, the decentralized approach is well suited in the following cases:
• the outcome of executing a thinkLet is uncertain. For instance, if the number of evaluation criteria generated in the criteria generation phase of Fig. 3.1 is too low or too high, we need to re-execute this phase or skip the next one, i.e.
criteria selection;
• the decision makers' objectives are unstable or unknown at design time. For instance, in Fig. 3.1, if we do not know at design time whether it is worthwhile to associate different weights with the criteria, then the aggregation rule that defines the next thinkLet can be chosen only after the set of evaluation criteria is known. In this case, the problem is decomposed into several levels of abstraction for which the CDM process model is executed.
The approach is similar to the classical means-ends strategy proposed by Simon (1981) to control the search in a complex problem space. It relates to the cooperative control schemes discussed in Sect. 1.2.2. Some more insights into the advantages and drawbacks of the two approaches will be presented in Sect. 5.1, as a result of an agent-based simulation used to investigate different use scenarios for CDM process design with respect to contextual factors such as the frequency and diversity of the problems for which the GDSS is used, the users' experience with the GDSS, and so on.
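The practitioner's iteration described in this section (define, evaluate, select, deploy, assess, refine) can be illustrated as a simple loop. Everything below (function names, scores, thresholds and the "refine by dropping the failed design" step) is an illustrative assumption rather than an actual facilitation tool API.

```python
def design_deploy_loop(alternatives, evaluate, deploy, good_enough=0.8,
                       max_iterations=5):
    """Evaluate the candidate CDM process designs, deploy the best one,
    assess its performance, and refine (here: discard the failed design
    and try the next best) until the outcome is acceptable."""
    remaining = list(alternatives)
    for _ in range(max_iterations):
        best = max(remaining, key=evaluate)      # select best design
        performance = deploy(best)               # deploy and assess
        if performance >= good_enough:
            return best, performance
        remaining.remove(best)                   # refine: drop failed design
        if not remaining:
            break
    return None, 0.0

# Toy run: scores stand in for design-time evaluation; deployment
# outcomes are stubbed for illustration.
designs = ["centralized agenda", "decentralized agenda"]
scores = {"centralized agenda": 0.9, "decentralized agenda": 0.7}
outcomes = {"centralized agenda": 0.6, "decentralized agenda": 0.85}
print(design_deploy_loop(designs, scores.get, outcomes.get))
# ('decentralized agenda', 0.85)
```

The toy run shows why the iteration matters: the design that looked best before deployment underperforms, and only the assess-and-refine step surfaces the alternative that actually satisfies the group.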

3.5 Notes and Comments

The chapter completed at this point was meant to present specific methods for supporting collaborative work, with particular emphasis on collaborative decision making. The main ideas to be retained by the reader are:
• A collaborative group is made up of several members (the participants) who are assigned, or decide by themselves, to jointly attain a set of common goals by carrying out a set of activities and using a number of procedures and techniques.
• The evolution of solutions provided by academia and industry to support collaborative activities was highly influenced by advances in electronic communication and by practical experience with group decision rooms and the corresponding software.
• Under the influence of the new technologies, the size of the group has evolved from a few people located in a decision room to unlimited crowds that are, sometimes, called upon to solve complex problems.

• A collaborative activity often needs to construct a collective view of the whole group, for which many voting methods may be used to aggregate the individual preferences.
• Any voting rule inherits certain axiomatic particularities that make it more or less susceptible to displaying paradoxical results.
• All voting rules are vulnerable to strategic manipulation; therefore an important aspect of collaborative decision making is the identification of incentives that encourage the group members to disclose their true preferences.
• When voting rules are supported by collaborative technologies, additional concerns (e.g. access to voting results during the voting process, frequency of voting, dynamic listing of alternatives and so on) should be carefully addressed.
• In the last decade, social choice theory has been addressing various extensions with significant impact on deploying knowledge-driven collaborative tools for judgment aggregation, resource allocation, aggregation of arguments in structured communication frameworks, matchmaking, and many others.
• The increasing number of collaboration tools and the possibility of employing them in any collaborative activity is exponentially raising the cognitive complexity associated with the design and execution of collaborative processes; therefore the automation of facilitators' roles is the main challenge in collaboration engineering.
• Collaboration engineering is contributing a significant body of research focused on designing reusable collaboration practices that can be implemented in a systematic way by non-expert users.
• The key element of designing and deploying a collaborative activity is the identification of collaboration patterns that typify how the pool of common knowledge is transformed by the group.
The reader interested in the topics of social choice theory may find additional information in several works such as Arrow et al. (2002, 2011), Gaertner (2009), Shmuel (2010), and Brandt et al. (2016). In particular, the results of the COST-IC1205 (2016) action can provide additional valuable and updated insights with respect to the computational aspects of this analytic work.
In the following chapter we will review several modern information and communication technologies which exert a significant impact on the design and usage of group decision support systems.

References

Adla A, Zaraté P, Soubie J L, Taghezout N (2010) Case and model based hybrid reasoning for group decision making. In: Respício A, Adam F, Phillips-Wren G, Teixeira C, Telhada J (eds) Frontiers in AI and Applications, vol 212. IOS Press, Amsterdam, The Netherlands: p. 309–320.
Amiyo M, Nabukenya J, Sol H J (2012) A repeatable collaboration process for exploring business process improvement alternatives. In: Proceedings of the 45th Hawaii International Conference on System Sciences (HICSS). IEEE Computer Society Press, Los Alamitos: p. 326–335.

Appelman J H, Driel J (2005) Crisis-response in the Port of Rotterdam: can we do without a facilitator in distributed settings? In: Proceedings of the 38th Hawaii International Conference on System Sciences, IEEE Computer Society Press, Los Alamitos: p. 17b.
ArgKit (2016) ArgKit. http://www.argkit.org/. Accessed 15 July 2016.
Arrow K J (1963) Social Choice and Individual Values, 2nd edn. Cowles Foundation, New Haven.
Arrow K J, Sen A K, Suzumura K (eds) (2002) Handbook of Social Choice and Welfare, vol 1. Elsevier.
Arrow K J, Sen A K, Suzumura K (eds) (2011) Handbook of Social Choice and Welfare, vol 2. Elsevier.
Baral C (2003) Knowledge Representation, Reasoning and Declarative Problem Solving. Cambridge University Press, Cambridge.
Baroni P, Caminada M, Giacomin M (2011) An introduction to argumentation semantics. The Knowledge Engineering Review, 26: 365–410.
Borda de J-C (1781) Mémoire sur les élections au scrutin. Histoire de l'Académie Royale des Sciences, Paris.
Bouveret S, Chevaleyre Y, Maudet N (2016) Fair allocation of indivisible goods. In: Brandt F, Conitzer V, Endriss U, Lang J, Procaccia A D (eds) Handbook of Computational Social Choice. Cambridge University Press: p. 284–310.
BPMN (2011) Business Process Model and Notation (BPMN), Version 2.0 (available at: http://www.omg.org/spec/BPMN/2.0/, accessed on 15.07.2016).
Brabham D C (2008) Crowdsourcing as a model for problem solving: an introduction and cases. Convergence: The International Journal of Research into New Media Technologies, 14(1): 75–90.
Brabham D C (2013) Crowdsourcing. MIT Press, Cambridge, Massachusetts.
Brams S J, Fishburn P C (1978) Approval voting. The American Political Science Review, 72(3): 831–847.
Brams S J, Taylor A D (1995) An envy-free cake division protocol. American Mathematical Monthly, 102(1): 9–18.
Brams S J, Taylor A D (1996) Fair Division: From Cake-Cutting to Dispute Resolution. Cambridge University Press.
Brandt F, Conitzer V, Endriss U, Lang J, Procaccia A D (eds) (2016) Handbook of Computational Social Choice. Cambridge University Press.
Briggs R O (2015) Computer-supported cooperative work and social computing. Business & Information Systems Engineering, 7(3): 217–220.
Briggs R O, de Vreede G J, Nunamaker Jr J F (2003) Collaboration engineering with ThinkLets to pursue sustained success with GSS. Journal of Management Information Systems, 19: 31–63.
Briggs R O, Kolfschoten G L, de Vreede G J, Albrecht C, Lukosch S G, Dean D L (2014) Six layer model of collaboration. In: Nunamaker Jr J F, Romano N C, Briggs R O (eds) Collaboration Systems: Concept, Value, and Use. Routledge, Taylor & Francis Group, London: p. 211–228.
Briggs R O, Kolfschoten G L, de Vreede G J, Albrecht C, Lukosch S, Dean D L (2015) A six-layer model of collaboration. In: Nunamaker Jr J F, Romano Jr N C, Briggs R O (eds) Collaboration Systems: Concept, Value, and Use. Routledge, Taylor & Francis Group, London: p. 211–227.
Briggs R O, Kolfschoten G L, de Vreede G J, Lukosch S, Albrecht C C (2013) Facilitator-in-a-box: process support applications to help practitioners realize the potential of collaboration technology. Journal of Management Information Systems, 29: 159–193.
Budish E (2011) The combinatorial assignment problem: approximate competitive equilibrium from equal incomes. Journal of Political Economy, 119(6): 1061–1103.
Camarinha-Matos L M, Afsarmanesh H, Galeano N, Molina A (2009) Collaborative networked organizations: concepts and practice in manufacturing enterprise. Computers & Industrial Engineering, 57: 46–60.

Camilleri G, Zaraté P (2014) EasyMeeting: a group decision support system. IRIT Report RR–2014–10—FR (available at: ftp://ftp.irit.fr/pub/IRIT/SMAC/DOCUMENTS/PUBLIS/EasyMeeting.pdf, accessed on 16.07.2016).
Caminada M (2006) On the issue of reinstatement in argumentation. In: Fisher M, van der Hoek W, Konev B, Lisitsa A (eds) Proceedings of the 10th European Conference on Logics in Artificial Intelligence (JELIA). Lecture Notes in Computer Science, vol 4160. Springer: p. 111–123.
Caminada M W A, Alcântara J, Sá S, Dvořák W (2015) On the equivalence between logic programming semantics and argumentation semantics. International Journal of Approximate Reasoning, 58: 87–111.
Caminada M W A, Pigozzi G (2011) On judgment aggregation in abstract argumentation. Journal of Autonomous Agents and Multi-Agent Systems, 22(1): 64–102.
Chen F, Briggs R O (2015) An empirical test of the focus theory of group productivity. In: Nunamaker Jr J F, Romano N C, Briggs R O (eds) Collaboration Systems: Concept, Value and Use. Routledge, Taylor & Francis Group, London: p. 56–82.
Cheng K-E, Deek F P (2012) Voting tools in group decision support systems: theory and implementation. International Journal of Management and Decision Making, 12(1): 1–20.
Chevaleyre Y, Dunne P E, Endriss U, Lang J, Lemaître M, Maudet N, Padget J, Phelps S, Rodríguez-Aguilar J A, Sousa P (2006) Issues in multiagent resource allocation. Informatica, 30: 3–31.
Chevaleyre Y, Lang J, Maudet N, Monnot J (2011) Compilation and communication protocols for voting rules with a dynamic set of candidates. In: Proceedings of the 13th Conference on Theoretical Aspects of Rationality and Knowledge (TARK). ACM, New York: p. 153–160.
Chiu C M, Liang T P, Turban E (2014) What can crowdsourcing do for decision support? Decision Support Systems, 65: 40–49.
Compendium Institute (2016) Compendium Institute (available at: http://compendiuminstitute.net, accessed 15 July 2016).
Condorcet de M (1785) Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix. Imprimerie Royale, Paris. Facsimile published in 1972 by Chelsea Publishing Company, New York.
Conitzer V, Sandholm T (2005) Communication complexity of common voting rules. In: Proceedings of the 6th ACM Conference on Electronic Commerce: p. 78–87.
Conitzer V, Walsh T (2016) Barriers to manipulation in voting. In: Brandt F, Conitzer V, Endriss U, Lang J, Procaccia A D (eds) Handbook of Computational Social Choice. Cambridge University Press: p. 127–145.
Conklin J, Begeman M (1988) gIBIS: a hypertext tool for exploratory policy discussion. In: Proceedings of the 1988 ACM Conference on Computer-Supported Cooperative Work (CSCW '88): p. 140–152.
COST-IC1205 (2016) COST Action IC1205 on Computational Social Choice (available at: http://www.illc.uva.nl/COST-IC1205/, accessed on 15.06.2016).
DebateGraph (2016) DebateGraph (available at: http://debategraph.org, accessed 15.07.2016).
Debian (2016) Debian voting information (available at: http://www.debian.org/vote/, accessed on 14.07.2016).
Dennis A R, George J F, Jessup L M, Nunamaker Jr J F, Vogel D R (1988) Information technology to support electronic meetings. MIS Quarterly, 12(4): 591–624.
Dietrich F, List C (2007) Judgment aggregation by quota rules: majority voting generalized. Journal of Theoretical Politics, 19(4): 391–424.
Dietrich F, Mongin P (2010) The premise-based approach to judgment aggregation. Journal of Economic Theory, 145(2): 562–582.
Dung P M (1995) On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77(2): 321–358.
Endriss U (2016) Judgment aggregation. In: Brandt F, Conitzer V, Endriss U, Lang J, Procaccia A D (eds) Handbook of Computational Social Choice. Cambridge University Press: p. 399–426.

Enserink B (2003) Creating a scenariologic: design and application of a repeatable methodology. In: Proceedings of the 36th Hawaii International Conference on System Sciences, IEEE Computer Society Press, Los Alamitos: p. 21.
Estellés-Arolas E, González-Ladrón-de-Guevara F (2012) Towards an integrated crowdsourcing definition. Journal of Information Science, 38(2): 189–200.
Felsenthal D S (2010) Review of paradoxes afflicting various voting procedures where one out of m candidates (m ≥ 2) must be elected. In: Assessing Alternative Voting Procedures. London School of Economics and Political Science, London, UK.
Filip F G (1990) System analysis and expert systems techniques for operative decision making. Journal of Systems Analysis Modelling Simulation, 8(2): 296–404.
Fishburn P C (1977) Condorcet social choice functions. SIAM Journal on Applied Mathematics, 33: 469–489.
Fishburn P C, Brams S J (1983) Paradoxes of preferential voting. Mathematics Magazine, 56: 207–214.
Gaertner W (2009) A Primer in Social Choice Theory, revised edn. Oxford University Press.
Gavish B, Gerdes J H (1997) Voting mechanisms and their implications in a GDSS environment. Annals of Operations Research, 71: 41–74.
Gibbard A (1973) Manipulation of voting schemes: a general result. Econometrica, 41: 587–601.
Goldman J, Procaccia A D (2014) Spliddit: unleashing fair division algorithms. SIGecom Exchanges, 13(2): 41–46.
Gray P, Johansen B, Nunamaker J, Rodman J, Wagner G R (2011) GDSS past, present, and future. In: Schuff D, Paradice D, Burstein F, Power D J, Sharda R (eds) Decision Support: An Examination of the DSS Discipline. Springer Science+Business Media, New York: p. 1–24.
Haines C G, Haines B H (1921) Principles and Problems of Government. Harper & Brothers; digital version 2008.
Harder R J, Keeter J M, Woodcock B W, Ferguson J W, Wills F W (2005) Insights in implementing collaboration engineering. In: Proceedings of the 38th Hawaii International Conference on System Sciences, IEEE Computer Society Press, Los Alamitos.
Hare T (1859) Treatise on the Election of Representatives, Parliamentary and Municipal. Longman, Green, Reader, and Dyer, London.
Heiko T (2012) Cloud-based collaborative decision making. Journal of Decision Support System Technology, 4: 39–59.
Hengst M den, van de Kar E A M, Appelman J (2004) Designing mobile information services: user requirements elicitation with GSS design and application of a repeatable process. In: Proceedings of the 37th Annual Hawaii International Conference on System Sciences, IEEE Computer Society, 5–8 January, Big Island, Hawaii: p. 1–10.
Herreiner D, Puppe C (2002) A simple procedure for finding equitable allocations of indivisible goods. Social Choice and Welfare, 19(2): 415–430.
Hopper A, Needham R M (1988) The Cambridge fast ring networking system. IEEE Transactions on Computers, 37(10): 1214–1223.
Howe J (2006) The rise of crowdsourcing. Wired, 14(6): 176–183.
IAF (2016) IAF Methods Database (available at: http://www.iaf-methods.org, accessed on 15.06.2016).
Karacapilidis N, Papadias D (2001) Computer supported argumentation and collaborative decision making: the Hermes system. Information Systems, 26(4): 259–277.
Koch M, Schwabe G, Briggs R O (2015) CSCW and social computing. Business & Information Systems Engineering, DOI 10.1007/s12599-015-0376-2.
Kock N (2005) What is e-collaboration? Editorial essay. International Journal of e-Collaboration, 1(1): i–vii.
Kock N (2009) A basic definition of e-collaboration and underlying concepts. In: Kock N (ed) E-collaboration: Concepts, Methodologies, Tools and Applications. IGI Global: p. 1–7.
Kock N, Davison R, Wazlawick R, Ocker R (2001) E-collaboration: a look at past research and future challenges. Journal of Systems and Information Technology, 5(1): 1–8.

Kock N, Nosek J (2005) Expanding the boundaries of e-collaboration. IEEE Transactions on Professional Communication, 48(1): 1–9.
Kolfschoten G L, Lowry P B, Dean D L, de Vreede G-J, Briggs R O (2015) Patterns in collaboration. In: Nunamaker Jr J F, Romano Jr N C, Briggs R O (eds) Collaboration Systems: Concept, Value, and Use. Routledge, Taylor & Francis Group, London: p. 3–105.
Kolfschoten G L, Nunamaker Jr J F (2015a) Organizing the theoretical foundation of collaboration engineering. In: Nunamaker Jr J F, Romano Jr N C, Briggs R O (eds) Collaboration Systems: Concept, Value, and Use. Routledge, Taylor & Francis Group, London: p. 27–41.
Kolfschoten G L, Nunamaker Jr J F (2015b) Collaboration support technology: patterns of successful collaboration support based on three decades of GSS research and use. In: Nunamaker Jr J F, Romano Jr N C, Briggs R O (eds) Collaboration Systems: Concept, Value, and Use. Routledge, Taylor & Francis Group, London: p. 189–198.
Kornhauser L A, Sager L G (1993) The one and the many: adjudication in collegial courts. California Law Review, 81(1): 1–59.
Kramer G H (1977) A dynamical model of political equilibrium. Journal of Economic Theory, 16: 310–334.
List C, Pettit P (2002) Aggregating sets of judgments: an impossibility result. Economics and Philosophy, 18: 89–110.
List C, Pettit P (2004) Aggregating sets of judgments: two impossibility results compared. Synthese, 140(1–2): 207–235.
List C (2013) Social choice theory. In: Zalta E N (ed) Stanford Encyclopedia of Philosophy (Winter 2013 edition) (available at: http://plato.stanford.edu/entries/social-choice/, accessed on 14.07.2016).
Mametjanov A, Kjeldgaard D, Pettepier T, Albrecht C, Lukosch S, Briggs R (2011) Arcade: action-centered rapid collaborative application development and execution. In: Proceedings of the 44th Hawaii International Conference on System Sciences (HICSS), IEEE Computer Society Press: p. 1–10.
May K (1952) A set of independent, necessary and sufficient conditions for simple majority decision. Econometrica, 20(2–3): 680–684.
Mittleman D D, Murphy J D, Briggs R O (2015) Classification of collaboration technology. In: Nunamaker Jr J F, Romano Jr N C, Briggs R O (eds) Collaboration Systems: Concept, Value, and Use. Routledge, Taylor & Francis Group, London: p. 42–54.
Montague M, Aslam J A (2002) Condorcet fusion for improved retrieval. In: Proceedings of the Eleventh International Conference on Information and Knowledge Management (CIKM '02). ACM, McLean, Virginia, USA: p. 538–548.
Moulin H (1983) The Strategy of Social Choice. North-Holland, Amsterdam.
Moulin H (1988) Condorcet's principle implies the no show paradox. Journal of Economic Theory, 45: 53–64.
Munkvold B E, Anson R (2001) Organizational adoption and diffusion of electronic meeting systems: a case study. In: Ellis C S, Zigurs I (eds) Proceedings of the 2001 International ACM SIGGROUP Conference on Supporting Group Work. ACM, New York: p. 279–287.
Nof S Y, Ceroni J, Jeong W, Moghaddam M (2015) Revolutionizing Collaboration through e-Work, e-Business and e-Service. Springer-Verlag, Berlin.
Nunamaker Jr J F, Dennis A R, Valacich J S, Vogel D, George J F (1991) Electronic meeting systems to support group work. Communications of the ACM, 34(7): 40–61.
Nunamaker Jr J F, Briggs R O, Mittleman D D, Vogel D R, Balthazard P A (1997) Lessons from a dozen years of group support systems research: a discussion of lab and field findings. Journal of Management Information Systems, 13(3): 163–207.
Nunamaker Jr J F, Briggs R O, Romano Jr N C (2015a) Introduction to collaboration systems. Part I: a brief history and lessons learned. In: Nunamaker Jr J F, Romano Jr N C, Briggs R O (eds) Collaboration Systems: Concept, Value and Use. Routledge, Taylor & Francis Group, London: p. 1–8.
Nunamaker Jr J F, Romano Jr N C, Briggs R O (2015b) Collaboration systems. Part II: foundations. In: Nunamaker Jr J F, Romano Jr N C, Briggs R O (eds) Collaboration Systems: Concept, Value and Use. Routledge, Taylor & Francis Group, London: p. 9–23.

OpenSpace-Online (2016) (available at: http://openspace-online.com/, accessed on 15.07.2016).
Ouerdane W, Maudet N, Tsoukiàs A (2010) Argumentation theory and decision aiding. In: Figueira J, Greco S, Ehrgott M (eds) Trends in Multiple Criteria Decision Analysis. International Series in Operations Research and Management Science, vol 142. Springer: p. 177–208.
Plott C R (1973) Path independence, rationality, and social choice. Econometrica, 41: 1075–1091.
Procaccia A D (2016) Cake cutting algorithms. In: Brandt F, Conitzer V, Endriss U, Lang J, Procaccia A D (eds) Handbook of Computational Social Choice. Cambridge University Press: p. 311–322.
Rahwan I, Tohmé F (2010) Collective argument evaluation as judgment aggregation. In: van der Hoek, Kaminka, Lespérance, Luck, Sen (eds) Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems: p. 417–424.
Reed C, Rowe G (2004) Araucaria: software for argument analysis, diagramming and representation. International Journal of AI Tools, 14(3–4): 961–980.
Ricci F, Rokach L, Shapira B, Kantor P B (2011) Recommender Systems Handbook. Springer, New York.
Rodriguez M A, Steinbock D J (2004) Group holographic modeling for societal-scale decision-making systems. In: Proceedings of the North American Association for Computational Social and Organizational Science Conference, Pittsburgh.
Roy S R (2012) Virtual collaboration: the skills needed to collaborate in a virtual environment. Journal of Internet Social Networking & Virtual Communities, 1: 1–8.
Sajithra K, Patil R (2013) Social media: history and components. Journal of Business and Management, 7(1): 69–74.
Satterthwaite M A (1975) Strategy-proofness and Arrow's conditions: existence and correspondence theorems for voting procedures and social welfare functions. Journal of Economic Theory, 10: 187–217.
Sen A K (1984) Resources, Values and Development. Basil Blackwell, Oxford.
Shim J P, Warkentin M, Courtney J F, Power D J, Sharda R, Carlsson C (2002) Past, present and future of decision support technology. Decision Support Systems, 33(2): 111–126.
Shmuel N (2010) Collective Preference and Choice. Cambridge University Press.
Simon H A (1981) The Sciences of the Artificial. MIT Press, Cambridge, Mass.
Simpson P B (1969) On defining areas of voter choice. Quarterly Journal of Economics, 83: 478–490.
Smith J (1973) Aggregation of preferences with variable electorate. Econometrica, 41(6): 1027–1041.
Steiner I D (1972) Group Process and Productivity. Academic Press, New York.
Steinhaus H (1948) The problem of fair division. Econometrica, 16: 101–104.
Suduc A M, Bizoi M, Filip F G (2009) Exploring multimedia web conferencing. Informatica Economica, 13(3): 5–17.
ThinkTank (2016) The platform for collective intelligence (available at: http://thinktank.net/, accessed on 15.06.2016).
Tideman T N (1987) Independence of clones as a criterion for voting rules. Social Choice and Welfare, 3: 185–206.
TOAST (2016) The Online Argument Structures Tool. http://toast.arg-tech.org. Accessed 15 July 2016.
Tomlinson R (2009) The first network email: a history from Ray Tomlinson, the inventor of email. Raytheon BBN Technologies.
Turban E, Liang T P, Wu S P J (2011) A framework for adopting collaboration 2.0 tools for virtual group decision making. Group Decision and Negotiation, 20(2): 137–154.
Turoff M (1991) Computer-mediated communication requirements for group support. Journal of Organizational Computing and Electronic Commerce, 1(1): 85–113.
Turoff M, Hiltz S, Cho H, Li Z, Wang Y (2002) Social decision support system (SDSS). In: Proceedings of the 35th Hawaii International Conference on System Sciences, IEEE Press.
Vreede de G J, Briggs R O, Kolfschoten G L (2006) ThinkLets: a pattern language for facilitated and practitioner-guided collaboration processes. International Journal of Computer Applications in Technology, 25(2–3): 140–154.

Young H P (1974) An axiomatization of Borda's rule. Journal of Economic Theory, 9: 43–52.
Zamfirescu B C, Candea C, Radu C (2014) A stigmergic approach for social interaction design in collaboration engineering. Neurocomputing, 146: 151–163.
Zhong H, Reyes Levalle R, Moghaddam M, Nof S Y (2015) Collaborative intelligence: definition and measured impacts on internetworked e-work. Management and Production Engineering Review, 6(1): 67–78.
Zubizarreta R (2006) Manual for Jim Rough's Dynamic Facilitation Method. The Co-Intelligence Institute (available at: http://www.co-intelligence.org/DFManual.html, accessed on 16.07.2016).
Zwicker W S (2016) Introduction to the theory of voting. In: Brandt F, Conitzer V, Endriss U, Lang J, Procaccia A D (eds) Handbook of Computational Social Choice. Cambridge University Press: p. 23–56.

Chapter 4
Essential Enabling Technologies

The previous chapter presented several relevant collaboration methods. Section 3.1 indicated that the evolution of e-collaboration was associated with a number of technological breakthroughs. At present, collaborative decision-making activities are carried out in a rather effective manner due to advances in several technologies: decision-making is faster and based on accurate data, and ever more people can be involved in decision-making activities in a more comfortable way. The objective of this chapter is to provide an informative view of several modern Information and Communication Technologies (I&CT) and the roles they play in enabling and even influencing collaboration, with particular emphasis on the computer-supported collaborative decision-making process. The chapter is organized in sections containing parallel presentations of several modern technologies. Each section first presents the concept and then reviews the applications, with particular emphasis on the value the technology can deliver to collaborative decision-making activities. The relevant standards are reviewed too. The first section addresses business intelligence and analytics (BI&A), used nowadays by many companies to extract information from data, predict the course of the business and increase profit. The next two sections present web technologies and their applications, and social networks, respectively. In the fourth section, we present specific mobile computing tools and platforms used in collaborative systems and processes. In the fifth section, we review biometric technologies, meant to ensure the authentication of the people involved in collaborative decision-making. Serious digital games are addressed in the sixth section. The seventh section contains several notes, comments and recommendations for readers interested in an in-depth approach to these technologies.


4.1 Modern Data Technologies

In the mid-1970s, Steven Alter (1977, 1980) classified decision support systems (DSS) in accordance with the generic operations they performed, independent of the application type and of decision-maker characteristics such as the user's application-related knowledge, goals pursued, and computer usage skills (called "How"-type knowledge). In Alter's taxonomy, there were two particular DSS subclasses: (a) data-oriented DSS, meant to support data retrieval and analysis, and (b) model-oriented DSS, meant to perform operations, such as optimizations, simulations and/or other computations, that could recommend a certain alternative. A further analysis of the data-oriented DSS subclass allowed the identification of three more particular subclasses:
• File drawer DSS, meant to provide effective access to data through simple queries and adequate reporting;
• Data analysis DSS, which could perform operations such as data manipulation and can be regarded as early versions of the data warehousing systems of the 1990s;
• Information analysis DSS, which anticipated current business intelligence and analytics (BI&A) tools.
In 2002, Power grouped the above three subclasses under the name of data-driven DSS (see Sect. 2.3). A similar terminology was adopted by Dhar and Stein (1997) in Chap. 4 of their book. Since Alter's original taxonomy and Power's classification, the economic and societal landscape has changed significantly under the influence of the hyper-connected interaction patterns of people, machines, and business entities (Filip and Herrera-Viedma 2014). There are voices stating that the world is increasingly driven by insights derived from big data (WEF 2013) and that the future society will be driven by data (Power 2015). This evolution was enabled and stimulated by several technological and conceptual developments: the technology has rapidly evolved, data have accumulated into huge data sets, and collaborative decision-making models have gained traction.
This section presents the main aspects of the big data (BD) phenomenon and the current business intelligence and analytics solutions that are relevant for data-driven decision-making support, with particular emphasis on collaborative decision-making (CDM) activities.

4.1.1 Data-Driven Decision Support Systems

Brynjolfsson et al. (2011) developed a measure of the use of data-driven decision making (DDD) that was meant to capture business practices of collecting and analysing the external and internal data of an enterprise. Having surveyed 179 companies from both the private industry and the public sector of the United States, the above authors examined the relation between DDD and productivity. They found out that DDD could be associated with a 5–6 % increase in the companies' output and productivity compared to what can be explained by traditional inputs and IT usage. Similar results were obtained by Tene and Polonetsky (2012), who estimated that 5–6 % of the economic gains are due to data-driven decision making. One year later, Provost and Fawcett (2013) noticed that the early adopters of data-driven decision support systems were the companies of the finance and telecom sectors: their executives perceived the opportunity offered by such systems to base their decisions on data analysis rather than on human intuition only. At the same time, several software companies, such as Business Objects, Cognos, Hyperion, Teradata and so on, perceived the business opportunity and offered tools for data-driven decision support, besides the major IT vendors such as IBM, Oracle and Microsoft. It is admitted (Power 2008) that it was Inmon's (1981) book that provided a conceptual foundation for modern data-driven DSS and influenced both academic circles and practitioners to adopt a new concept and an advanced technology. Power (2008) noticed that, at present, data-driven decision support is used for a broad spectrum of purposes, including operational and strategic business intelligence queries, real-time supervision and performance monitoring, and CRM (Customer Relationship Management). A possible non-exhaustive set of major characteristic features of a data-driven DSS is proposed by Power (2008).
The set includes:
• Ad hoc data filtering and retrieval, possibly including drill-down capabilities to change the aggregation level from the most summarized one to more detailed ones;
• Creating data displays that allow the user to choose the desired format (scatter diagrams, bar and pie charts and so on) and/or to apply various effects, such as animation, playing back historical data and so on;
• Data management;
• Data summarization, including the possibility to customize the data aggregation format, to perform the desired computations or to examine the data from various perspectives;
• Spreadsheet integration;
• Metadata creation and retrieval;
• Report designing, generation and storing, so that reports can be used or distributed via electronic documents or posted on webpages;
• Statistical analysis, including data mining for discovering useful relationships;
• Viewing predefined data displays, such as dashboards similar to the ones installed on vehicle boards, or scorecards that display performance metrics and various indicators.
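The drill-down capability mentioned above can be illustrated with a short sketch: the same (hypothetical) sales records are summarized at two aggregation levels, from the most summarized one down to a more detailed one. All record fields and names are invented for the example.

```python
# Hypothetical sales records; field names invented for illustration.
sales = [
    {"region": "North", "city": "Oslo",   "amount": 120},
    {"region": "North", "city": "Bergen", "amount": 80},
    {"region": "South", "city": "Rome",   "amount": 200},
]

def summarize(records, keys):
    """Sum the 'amount' field over groups defined by the given keys."""
    totals = {}
    for record in records:
        group = tuple(record[k] for k in keys)
        totals[group] = totals.get(group, 0) + record["amount"]
    return totals

by_region = summarize(sales, ["region"])         # most summarized level
by_city = summarize(sales, ["region", "city"])   # drilled-down level
```

Drilling down simply means re-running the aggregation with a longer key list, exactly the operation a data-driven DSS exposes interactively.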

4.1.2 Big Data

According to Diebold (2012) and Fan and Bifet (2013), the term Big Data (BD) appeared in 1998 in a slide deck by J. Mashey entitled "Big Data—The Next Wave Infrastructure". The concept itself, and the associated remarkable movement noticeable both in academia and in industry, received special attention from the mass media (Cukier 2010) and governments (Weiss and Zgorski 2012). Big Data developments are tightly linked to the dramatic increase in the volume of stored data, as well as to the development of IT products meant to efficiently store, manage and process huge data sets. Hu et al. (2014) presented four major stages in the increase of stored data volumes, together with the associated technologies:
• the Megabyte (10⁶) to Gigabyte (10⁹) increase, noticed in the 1970s and associated with database machines;
• the Gigabyte to Terabyte (10¹²) increase of the late 1980s, associated with the advent of parallel database technology;
• the Terabyte to Petabyte (10¹⁵) increase of the late 1990s, associated with the Google File System and MapReduce technology;
• the Petabyte to Exabyte (10¹⁸) increase of the latest years.
There are several perspectives on the meaning of the term Big Data. For example, Bange et al. (2013, p. 12) used Big Data to describe a set of "methods and technologies for the highly scalable loading, storage and analysis of unstructured data". In their view, "Big Data technologies can help companies to manage huge data volumes, perform complex analysis and real-time integration of data from a variety of data structures and sources". Other authors define the term by enumerating its characteristics. For example, Madden (2012) offered a simple and intuitive definition of Big Data: a data set that is (a) too big (it has a large scale and various sources); (b) too fast (it needs quick processing); and (c) too hard (it cannot be managed by traditional database management systems).
Laney (2001) of the META (at present Gartner) Group defined Big Data by the three V's set of characteristics: (a) volume, (b) velocity, and (c) variety. A fourth dimension, namely veracity, was added by IBM researchers (Zikopoulos and Eaton 2011; Zikopoulos et al. 2013) to indicate that the data are accurate and true (Table 4.1). Chen et al. (2014) analyzed the relationships between Big Data and other modern technologies, such as Cloud Computing and the Internet of Things (IoT). Big Data application fields are numerous and diverse (Bughin et al. 2010; Chen and Zhang 2014; Chen et al. 2012; Manyika et al. 2011; Shi 2014; Sagiroglu and Sinanc 2013). As expected, web-based companies such as Amazon.com, eBay and Google are leaders in using big data applications. Financial companies, manufacturing firms and other organizations from the public and private sectors are taking advantage of the immense volumes of stored data and of the available technologies. The value extraction from Big Data volumes is carried out by using Business Intelligence and Analytics (see Sect. 4.1.3).

Table 4.1 Big data attributes (adapted from Kaisler et al. 2013)

Volume: measures the amount of data available and accessible to the organization
Velocity: is a measure of the speed of data creation, streaming and aggregation
Variety: measures the richness of data representation: numeric, textual, audio, video, structured, unstructured and so on
Value: is a measure of usefulness and usability in decision making
Complexity: measures the degree of interconnectedness and interdependence in data structures and the sensitivity of the whole to local changes
Veracity: measures the confidence in the accuracy of the data

4.1.3 Business Intelligence and Analytics

The term Business Intelligence (BI) was proposed in 1989 by H. Dresner, an analyst of the Gartner Group, to name "all technologies that help business make decision on facts" (Nylund 1999). Negash and Gray (2008) define BI as a "data driven DSS that combines data gathering, data storage, and knowledge management with analytical tools to present complex and competitive information to planners and decision makers".

4.1.3.1 Capabilities

According to the Gartner Group, Business Intelligence (BI) is defined as a software platform that delivers a set of capabilities organized into three classes of functions (Hagerty et al. 2012; Chen et al. 2012):
• Integration, which includes BI infrastructure, metadata management, development tools and enabling collaboration;
• Information delivery, which includes reporting, dashboards, ad hoc query, Microsoft Office integration, search-based BI, and mobile BI;
• Analysis, which includes OLAP (Online Analytical Processing), interactive visualization, predictive modeling, data mining and scorecards.
Hagerty et al. (2012) analyze the vendors' strengths and debatable aspects and place them into four subclasses ("leaders, challengers, niche players, and visionaries") of a quadrant with two dimensions: (a) ability to execute, and (b) completeness of vision. The content of the quadrants is permanently evolving. In Gartner's 2015 "Magic Quadrant for Advanced Analytics Platforms", 16 analytics and data science firms are analyzed with respect to 10 criteria and an updated quadrant is proposed. It contains:

• Leaders: SAS, IBM, KNIME, RapidMiner;
• Challengers: Dell, SAP;
• Visionaries: Alteryx, Microsoft, Alpine Data Labs;
• Niche players: FICO, Angoss, Predixion, Revolution Analytics, Prognoz, Salford Systems, Tibco Software.

4.1.3.2 Evolution

Business intelligence gained traction in industry and academia in the 1990s. In order to highlight the key analytical part of BI, Davenport (2006) introduced the term business analytics. There are several key analytics technologies: [big] data analytics, text analytics, web analytics, network analytics, and mobile analytics (Chen et al. 2012). The same authors offer an exhaustive academia-oriented view of the Business Intelligence and Analytics (BI&A) domain. They look at BI&A beyond the underlying data processing and analytical technologies and include aspects such as business-centered methodologies and practices to be applied to various important application domains, such as market intelligence associated with e-commerce, health care, research, e-government and security. The above-mentioned authors propose a framework for the BI&A domain and describe the key characteristic features and capabilities of each of its three evolution stages, as described in the sequel.

BI&A 1.0, which was adopted by industry in the 1990s, is characterized by the predominance of structured data, collected by existing legacy systems and stored and processed by RDBMS (Relational Database Management Systems). The majority of analytical techniques relied on well-established statistical methods and data mining tools developed in the 1980s. ETL (Extract, Transform and Load) for data warehouses, OLAP (Online Analytical Processing) and simple reporting tools are common aspects of BI&A 1.0.

BI&A 2.0 is the next stage, triggered by advances in Internet and Web technologies, in particular text mining and web search engines, and by the development of e-commerce in the early 2000s. Text and web mining techniques associated with social networks, Web 2.0 technology, and the crowdsourcing business practice allow making better decisions concerning both the products and services offered by companies and the applications recommended to potential customers.

BI&A 3.0 is a new stage characterized by the large-scale usage of mobile devices and applications such as the iPhone and iPad (see Sect. 4.4). Another characteristic feature of BI&A 3.0 is the effective data collection enabled by the Internet of Things (Atzori et al. 2010; Gubbi et al. 2013).
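To make the OLAP-style aggregation characteristic of BI&A 1.0 concrete, the following is a minimal Python sketch of a roll-up of one measure along one dimension; the sales records, dimension names and amounts are invented for the example.

```python
from collections import defaultdict

# Hypothetical structured sales records, of the kind BI&A 1.0 systems
# would extract from legacy systems into a data warehouse.
sales = [
    {"region": "North", "quarter": "Q1", "amount": 120.0},
    {"region": "North", "quarter": "Q2", "amount": 150.0},
    {"region": "South", "quarter": "Q1", "amount": 90.0},
    {"region": "South", "quarter": "Q2", "amount": 110.0},
]

def roll_up(records, dimension):
    """Aggregate the 'amount' measure along one dimension (an OLAP roll-up)."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec[dimension]] += rec["amount"]
    return dict(totals)

by_region = roll_up(sales, "region")    # totals per region
by_quarter = roll_up(sales, "quarter")  # totals per quarter
```

A real OLAP engine generalizes this idea to pre-computed cubes over many dimensions at once; the sketch only shows the single-dimension case.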

4.1.3.3 Impact

It is beyond the purpose of this section to give an exhaustive presentation of the landscape of BI&A concepts, technologies and applications. Instead, we will provide in the sequel several aspects that are relevant for the subject of this book, namely collaborative decision making (CDM). In Gartner’s 2013 report on the Hype Cycle for BI&A (Schlegel et al. 2007; Schlegel 2013), CDM, together with Big Data, is placed in the Priority Matrix in a class of technologies that will reach mainstream adoption in a period of 5–10 years and bring a transformational benefit. This means the technologies will “fundamentally change the way BI is consumed, in particular by increasing the speed at which information is delivered”. In the same documents, Sallam (2013) analyses several aspects of CDM, such as: (a) position and adoption speed, (b) business impact (benefits expected), and (c) recommended actions. The author views the CDM platform as a means that “combines BI and other sources of information used for decision making with social and collaboration capabilities and decision support tools and methodologies, to help knowledge workers make and capture higher-quality decisions. CDM brings the right decision makers and information together with decision tools and templates to examine an issue, brainstorm and evaluate options, agree on a course of action and capture the process to audit and mine for best practices”.

In Gartner’s 2015 Hype Cycle, which is meant to give the reader an idea of which of the analyzed technologies have the potential to become a part of our daily life, neither Big Data nor CDM can be explicitly noticed. A possible explanation is the fact that the concepts and associated technologies are already in practice. There are, instead, other Big Data related technologies. The new class of Citizen Data Science, placed in the innovation trigger region, is expected to reach the plateau in 2–5 years. A. Linden, the company’s research director, suggests cultivating citizen data scientists: people from the business sector who possess some data skills, possibly obtained from mathematics or even social science studies, and use them to explore and analyze data. In the next section we will examine the subject of data science (and scientists).

4.1.4 Towards a Data Science

Previous sections addressed various related technical issues, such as data-driven decision making and modern data processing, including Big Data concepts. They set the stage for presenting the trends to develop a new science and an associated class of jobs, namely Data Science and data scientist, respectively.

Provost and Fawcett (2013) defined Data Science as “a set of fundamental principles and, broadly applicable, general concepts that support and guide the principled extraction of information and knowledge from data”. At the same time, Davenport and Patil (2012) defined data scientists as “the people who understand how to fish out answers to important business questions from today’s tsunami of unstructured information”. Their role is crucial in preparing data-driven decision-making activities in several application fields, such as marketing, online product and service advertising and recommending, customer relationship management, fraud detection, workforce management and so on, in studying consumers’ behavior within CRM (Customer Relationship Management) systems, or in evaluating credit scores in the finance sector.

Apparently, Data Science partially overlaps data-driven decision support in activities such as intelligence and the design and evaluation of alternatives. Also, its definition may make one think about data mining. In fact, as will be seen in the sequel, its concepts and principles define a more general class than the one which underlies the collection of data mining techniques. A list of the main fundamental concepts of Data Science that underlie the principled extraction of information and knowledge from data was proposed by Provost and Fawcett (2013). These are the following:

• It is necessary to plan a systematic process, in order to extract useful knowledge from datasets and avoid situations when solutions are offered to problems that were not carefully analyzed. In this respect, Wirth and Hipp (2000) recommend the Cross Industry Standard Process for Data Mining (CRISP-DM). Chapman et al. (2000) describe the CRISP-DM 1.0 methodology in terms of a hierarchical process model which consists of sets of tasks placed on four levels of abstraction: (a) phases, (b) generic tasks, (c) specialized tasks, and (d) process instances;
• The obtained results should be considered in the context of their usage;
• Information technology is recommended to be used to find useful data items within large datasets. This concept underlies various data mining techniques;
• It is likely that entities similar with respect to known attributes are similar with respect to unknown ones and, consequently, similarity computation techniques are recommended;
• To avoid the dataset overfitting syndrome, it is recommended not to look “too hard” at a set of data;
• It is often possible to decompose the relationships between the business problems and analytic problems into more tractable sub-problems;
• Very close attention must be paid to confounding factors, possibly unseen ones, in order to draw correct causal conclusions;
• Since it is quite unlikely that problems come well prepared for the straightforward application of available tools, they should be decomposed into tractable sub-problems, provided that a mechanism for the recombination of components exists.
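The similarity principle in the list above can be illustrated with one widely used similarity computation, the cosine similarity. The following is a minimal Python sketch; the customer attribute vectors are invented for the example.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length attribute vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Invented attribute vectors for three customers (e.g. purchase counts
# per product category); names and values are illustrative only.
customer_a = [5.0, 1.0, 0.0, 2.0]
customer_b = [4.0, 1.0, 0.0, 3.0]
customer_c = [0.0, 0.0, 5.0, 0.0]

sim_ab = cosine_similarity(customer_a, customer_b)  # high: similar profiles
sim_ac = cosine_similarity(customer_a, customer_c)  # zero: no shared attributes
```

Entities with a high similarity on known attributes (here, customers A and B) would be predicted to behave similarly on unknown ones, which is the basis of many recommendation techniques.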

4.2 Web Technologies

4.2.1 The Concept

In his talk at the 35th anniversary celebrations of the Laboratory of Computer Science, T. Berners-Lee (1999) stated that:

The basic ideas of the Web are that an information space through which people can communicate, but communicate in a special way: by sharing their knowledge in a pool. The idea was not just that it should be a big browsing medium. The idea was that everybody would be putting their ideas in, as well as taking them out. This is not supposed to be a glorified television channel. Also, everybody should be excited about the power to actually create hypertext. Writing hypertext is good fun, and being with a group of people writing hypertext and trying to work something out, by making links, is a different way of working.

In the Merriam-Webster dictionary, the Web is defined as “a part of the Internet accessed through a graphical user interface and containing documents often connected by hyperlinks”. In order to see the difference between the Web (World Wide Web) and the Internet, we start from the definition of the Internet, which can be viewed as “a global system of interconnected computer networks that interchange data by packet switching using the standardized Internet Protocol Suite (TCP/IP)”. The Web, on the other hand, is defined as “an information space in which the items of interest, referred to as resources, are identified by global identifiers called Uniform Resource Identifiers (URI)” (W3C 2016).

The Web was created in 1989 by Tim Berners-Lee at the European Organization for Nuclear Research (CERN) in Switzerland. It was defined as “a large hypertext database with typed links”. Later, Tim Berners-Lee implemented all the components needed to make the Web a usable tool, namely the HTTP (HyperText Transfer Protocol) protocol, the HTML (HyperText Markup Language) language, the first World Wide Web browser, the first HTTP (web) server, and the first Web pages (Berners-Lee 1989). Over the years, the Web has evolved in terms of technologies, infrastructure, and number of users.

The Web has grown significantly during the last years and, consequently, has completely changed the way people and computers communicate, cooperate and collaborate. It led to the creation of new human-machine interfaces, generated revenues for the economy, and generated new jobs. It contributed to the development of new information and communication technologies and it completely changed the face of human society.
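The W3C definition quoted above rests on resources being identified by URIs. As a minimal illustration (the URI itself is invented for the example), Python's standard library can decompose a URI into the components that together identify a resource:

```python
from urllib.parse import urlparse

# An illustrative (invented) URI; each component helps identify a resource.
uri = "https://www.example.org/standards/webdesign/?q=html#history"
parts = urlparse(uri)

print(parts.scheme)    # protocol used to access the resource
print(parts.netloc)    # host (authority) serving the resource
print(parts.path)      # path of the resource on that host
print(parts.query)     # query string
print(parts.fragment)  # fragment within the resource
```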

4.2.2 Particular Subclasses

Aghaei et al. (2012) present the evolution of the Web from Web 1.0 to Web 4.0. The paper presents the fourth generation of the World Wide Web, a Web of intelligence connections, as the newest member of a series which started with Web 1.0, a network of information connections, and continued with Web 2.0, a network of people connections, and Web 3.0, a network of knowledge connections. The World Wide Web generations can be viewed as particular subclasses of the general Web superclass. The evolutions are described in Aghaei et al. (2012) as follows.

Web 1.0 is the first generation of the Web, which was a read-only web. It was created in 1989 by Tim Berners-Lee as a global hypertext space, where any information accessible on the network would be referred to by a single identifier.

Web 2.0 is the second generation of the Web, which was officially defined in 2004 by Dale Dougherty, the vice-president of O’Reilly Media. Web 2.0 has had many names: it was also called the wisdom Web, the people-centric Web, the participative Web, and the read-write Web. Rosen and Nelson (2008) identified three key features of the Web 2.0 platform, namely: (a) enabling users to publish information without having significant technical knowledge, (b) social networking, and (c) virtual communities created around a specific topic. The deployment of Web 2.0 technologies created the concept of collaboration 2.0, which was presented in Sect. 3.1.2 of this book. In a paper about Web 2.0, O’Reilly (2005) described the Social web as a collection of web sites and applications in which user contribution is the main driver for creating value. In the media milieu, collective intelligence is created by the people themselves, who contribute to Wikipedia (articles), YouTube (videos), Flickr (tagged photos), Del.icio.us (bookmarks) and so on. Such aggregated contributions can enable and support an effective intelligence phase of the decision-making process model of H. Simon (see Sect. 2.1.1).

Web 3.0 represents the third generation of the Web, which is also called the Semantic Web.
John Markoff of The New York Times referred to Web 3.0 as the third generation of the Web in 2006, but the term Semantic Web was coined by Tim Berners-Lee, the creator of the World Wide Web. He viewed the Semantic Web as “not a separate web but an extension of the current one, in which information is given well-defined meaning, better enabling computers and people to work in cooperation” (Berners-Lee et al. 2001). Nowadays, Semantic Web tools can effectively support the emerging requirements of life science research (Zhao et al. 2004). The most significant difference between Web 2.0 and Web 3.0 is that, while the first one is focused on the content creativity of users, the second one, Web 3.0, targets linked data sets. While in the Social Web the driver of value was the collective contribution of people, in the Semantic Web the value is created by the integration of structured data from many various sources, such as microprocessors, sensors, handy mobile devices and so on (Gruber 2008). Gruber defined collective intelligence as “human–computer systems in which machines enable the collection and harvesting of large amounts of human-generated knowledge”. He proposed a class of [killer] applications called collective knowledge systems, which unlock the “collective intelligence” of the Social Web with the knowledge representation and reasoning techniques of the Semantic Web to get the genuine crowd wisdom (see

Sect. 3.1.4). He illustrated the concept with Real Travel, a platform meant for travelers to share their experiences. They can contribute by describing their itineraries, telling various stories, providing photographs of the places where they stayed and of what they did, and providing recommendations for potential travelers.

Web 4.0 is also known as the Symbiotic Web. As mentioned in Aghaei et al. (2012), Web 4.0 will be the read-write-execution-concurrency Web. Regarding Web 4.0, though there is not yet an established consensus about the technologies to be used, one can foresee web technology moving toward using neural networks, genetic algorithms and other artificial intelligence based technologies in order to become an intelligent web.

4.2.3 Usages and Relevance to Collaborative Decision-Making

Fichter (2005) described how web technology paved the way for several on-line collaboration tools. Fundamentally, all tools offer three basic services: (a) an effective way of communication, (b) a mechanism to share documents, and (c) some means to discover other members of the community. As mentioned in Ivan et al. (2014), at present there are developments in the field of web technologies that push the transition to a new generation, meaning new ways of data acquisition and processing, an expanding diversity of individuals that get access to them, the generalization of database use, the development of multimedia applications, and the use of mobile and auto-adaptive interfaces.

When discussing applications of web technologies in the field of e-collaboration, the collaborative aspect is essential. In Uţǎ et al. (2014), several arguments are provided as follows:

• communication between developers, investors, and members of the target group is the only way to obtain a correct and complete definition of the problem that needs to be solved and that satisfies each party; the antagonistic optimality criteria of the three parties are subject to aggregation processes that bring them to a level allowing the simultaneous satisfaction of their demands, as accepted during all processes;
• cooperation within the teams of developers, and between the developers and the users and investors, respectively, aims at producing those corrections or changes that are generally accepted in order to satisfy the highest demands of all parties, each party finally considering that it has reached its own efficiency target;
• coordination within each party is designed to enable the creation of models for the analysis of decision alternatives, so that the uniqueness of each solution is clearly denoted and the update processes do not greatly affect the schedule, quality and total costs.
Hirokawa and Poole (1986) highlight the important role of communication in the group decision-making process. Starting with the appearance of the

World Wide Web in 1989, the Internet has played a very important role in the communication and collaboration between people. As pointed out by Bhargava et al. (2007), web technologies have evolved considerably in recent years and have changed the way in which decision support systems are designed, developed, implemented and deployed, because modern decision support systems provide a very broad range of capabilities to their users. As mentioned in Nof (2007), the recent developments in the area of collaborative control theory influence the emergence of e-Work, e-Production and e-Service processes. The history of decision support systems covers a rather short period of recent years, and the concepts and technologies are still evolving. Certainly, the future of decision support systems will be different from the views, discoveries and research results seen in the past.

In order to understand how web technologies can influence decision support systems in terms of development, implementation and deployment, we need to analyze the main tasks at the various stages of building and using data-driven and model-driven decision support systems. Web technologies make it possible to perform all of these tasks by using a remote web client (Bhargava et al. 2007). A web-based decision support system can offer the timely, user-friendly and secure distribution of information for business development. Also, a web-based decision support system (DSS) can help managers, sales coordinators and customer support staff to access and analyze, from any place, information that is much easier and more widely available. The web can make possible the deployment of a decision support system at the enterprise level.

Power (1998) defined the web-based decision support system as a “computerized system that delivers decision support information or decision support tools to a manager or business analyst using a thin-client web browser”. The computer server hosting the decision support application is linked to the user’s computer by a network using the TCP/IP communication protocol. Power and Kaparthi (2002) presented the architecture of web-based decision support systems and their advantages and disadvantages. The authors also describe the most relevant examples of web-based decision support system implementations. Bharati and Chaudhury (2004) conducted a few empirical studies in order to understand the factors that influence decision-making satisfaction in web-based decision support systems. Wang and Cheng (2006) introduced the concept of web-based spatial decision support systems (SDSS), which can enable and stimulate public access and involvement in inter-organizational collaborative decision-making. In the inter-organizational decision-making process, collaboration can be used to resolve conflicts and reduce uncertainty. Rasmussen et al. (2007) define a web-based collective intelligence system meant to support the decision-making process in very large networks of stakeholders.

A web-based Group Decision Support System (GDSS) meant to be used by small collaborating teams in financial classification problems was proposed by Rigopoulos et al. (2008a, b), Rigopoulos (2015). It can be integrated in the Business Intelligence system of the company and uses a multicriteria algorithm to aggregate the individual preferences of participants. The aggregation of individual preferences is carried out by using the WOWA

(Weighted OWA) method. WOWA extends the classical multi-attribute OWA (Ordered Weighted Average) method by weighting the sources besides the values of the attributes. Brandt et al. (2015) describe Pnyx, a powerful and usable tool for preference aggregation. The system is a completely web-based application, which was developed in the Python programming language (Sanner 1999) and does not require any prior knowledge of social choice theory (see Sect. 3.2).

Section 3.1.4 of the book presented in detail the concept of crowdsourcing as a new web-based business model meant to enable finding creative and effective solutions from a very large group of people. The web can provide information which supplements the content of invitations sent to the crowd. In addition, contributions of the crowd members can be comfortably uploaded onto the web. As mentioned by Power and Phillips-Wren (2011), in recent times Web 2.0 technologies have significantly impacted the design and implementation of decision support systems, especially mobile DSS. Nowadays, Web 2.0 applications are more efficient than the first generation of web-based DSS applications.

One must take into consideration the essential role of Web 3.0, or the Semantic Web, in the context of the collaborative decision-making process. The Semantic Web was introduced in 2001 as “a new form of Web content that is meaningful to computers” (Berners-Lee et al. 2001). As specified by Kück (2004), the Semantic Web has a tremendous potential to solve real issues in the communication between devices, such as finding, sorting and classifying information, but in order to achieve this objective it is necessary to understand that the power of Web 3.0 is better suited to some specific sorts of information than to other types.
Using ontologies in collaborative portals will make it possible to define the particular term vocabularies with which the system works, so that the system can offer effective customized solutions for the context and user categories, taking into account the natural language through which the communication is accomplished. By having available the knowledge base, populated progressively in real time, and by using genetic algorithms and neural networks, the collaborative web-based decision support system is able to deliver the best solutions to its users. The development of the new generation of decision support systems is associated with building rich internet applications (RIA) (Fraternali et al. 2010), by using programming languages and tools such as AJAX, Adobe Flex, and other similar tools that offer the features and functionalities of traditional desktop applications (Power and Phillips-Wren 2011).

Human resources play a very important role in the behavior of collaborative systems in the knowledge-based society. The collaborative systems of the knowledge-based society require a complete upgrade to take advantage of the opportunities offered by new technologies. This kind of system requires very large databases, which contain complete and correct data. Applying data mining algorithms to collect important data leads to the transformation of information into knowledge (Peng et al. 2008), which is used to structure and order the systems examined. In order to achieve qualitative and quantitative assessments of web technologies and social network applications in the area of e-collaboration, the evaluation of users’ behavior in such applications should be carried out.

It is worth remarking that web-based applications in the context of a collaborative decision support system differ from one another in the complexity of their components. The integration of the outcomes provided by the collaborative decision support system helps to automate the actual operations carried out by the system, but also to offer the strategic and operational data necessary in the decision-making process. Metrics contribute to a quantitative evaluation of collaborative systems from different economic areas. To assess a collaborative system, several indicators must be implemented and evaluated from the perspective of the accompanying properties: sensitivity, non-compensatory character, non-catastrophic character, and representativeness (Ivan et al. 2014).

Uţǎ et al. (2014) consider that, when we speak about processing in computer networks, all software products and their components are subject to storage in a computer network and must be regarded as virtual entities. The basic virtual entities are those that, once stored, are used by members of the target group in order to solve problems; based on some input data, after processing they lead to final results. The software products for the management of organizations, namely web applications, social networks, video archives, digital libraries, and many other pieces of computer software in operation, represent collections of basic virtual entities.

In Bhargava et al. (2007), the integration of web-based decision support systems is viewed as a pressing need, because standalone systems are becoming less useful. Their data inputs and outputs need to be integrated into the overall organizational architecture, and this integration process is a big challenge in the context of web-based decision support systems, especially in the case where the web servers are outside the organization. If such integration is missing, the adoption of decision support technologies can decrease significantly, whether they are web-based or not.
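The OWA aggregation underlying the WOWA method discussed above for group decision support can be sketched in a few lines: the weights are applied to the ranked values rather than to their sources. A minimal Python sketch follows; the participant scores and weight vectors are invented for the example.

```python
def owa(values, weights):
    """Ordered Weighted Average: weights apply to ranked positions,
    not to the sources of the values (WOWA additionally weights sources)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

# Invented scores given by three participants to one alternative.
scores = [0.6, 0.9, 0.3]

# An "optimistic" weight vector emphasises the best scores,
# a "pessimistic" one emphasises the worst.
optimistic = owa(scores, [0.6, 0.3, 0.1])   # 0.75
pessimistic = owa(scores, [0.1, 0.3, 0.6])  # 0.45
```

Choosing the weight vector thus lets the group tune the aggregation between optimistic and pessimistic attitudes, which is the main appeal of OWA-family operators in multicriteria group decision making.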

4.2.4 Standards

In the World Wide Web landscape, standards are technical specifications that define and describe aspects of the Web. Web standards also refer to a set of standardized best practices and a philosophy of web design and development for building web sites. Many of these best practices were introduced by the W3C, the consortium that is the main international standards organization for the World Wide Web (https://www.w3.org/Consortium/). The main standards for the Web are the recommendations defined by the W3C and the standards published by the International Organization for Standardization (ISO). There are also other standards, defined by the Web Hypertext Application Technology Working Group (WHATWG), Requests for Comments (RFC), Ecma International (formerly ECMA), and the Internet Assigned Numbers Authority (IANA).

Wang et al. (2005) stated that, in the field of biological data representation and management, more is needed than the XML (Extensible Markup Language) and RDF (Resource Description Framework) standards for data representation in the context of Semantic Web technologies. In Rutter et al. (2007), the role of Web accessibility is highlighted, which means ensuring that a given page on the Web can be accessed. The accessibility characteristic relies on related aspects of the Web standards. Cederholm (2009) shows how to put Web standards to work to create beautiful, meaningful and lightweight interfaces, which are accessible to all categories of users. In Sivashanmugam et al. (2003), one can find an approach to developing semantic web services by using the UDDI (Universal Description, Discovery and Integration) standard (Newcomer 2002) to store semantic annotations and to search for Web services based on them.
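The RDF standard mentioned above reduces every statement to a (subject, predicate, object) triple. As a minimal illustration of this data model, here is a toy in-memory triple store with pattern matching in Python; the gene-related names are invented for the example and do not come from any real vocabulary.

```python
# A tiny in-memory triple store: RDF reduces statements to
# (subject, predicate, object) triples. All names are invented.
triples = {
    ("ex:GeneA", "ex:regulates", "ex:GeneB"),
    ("ex:GeneB", "ex:locatedIn", "ex:Chromosome7"),
    ("ex:GeneA", "ex:locatedIn", "ex:Chromosome7"),
}

def match(s=None, p=None, o=None):
    """Return the triples matching a pattern; None acts as a wildcard."""
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

on_chr7 = match(p="ex:locatedIn", o="ex:Chromosome7")  # both genes
about_gene_a = match(s="ex:GeneA")                     # all statements about GeneA
```

Real Semantic Web tooling adds serialization formats, URIs as global identifiers, and the SPARQL query language on top of this same triple model.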

4.3 Social Networks

4.3.1 The Concept

Social networks represent structures formed by groups of actors and their interactions in a virtual environment. The concept evolved due to the progress of Web technologies (see Sect. 4.2), to fulfill the need of people to communicate and cooperate in a virtual environment. The social network concept appeared in the late 1800s, when É. Durkheim and F. Tönnies developed the idea of social networks in their theories and research about social groups (Durkheim 1893; Tönnies 1887).

The combination of computer networks with social networking software gave rise to a new environment for social interaction, called social media. Kaplan and Haenlein (2010) defined the concept of social media as “a group of internet-based applications that build on the ideological and technological foundations of Web 2.0, which allows the creation and exchange of user-generated content”. Under the general name of social media, one can identify various particular subclasses, including discussion forums, online groups, web blogs, social multimedia sites (for music, still pictures or video), social games and virtual worlds (Chen et al. 2012). Power and Phillips-Wren (2011) identified six unique types of social media, namely: (a) collaborative projects, (b) blogs and microblogs, (c) content communities, (d) social networking sites, (e) virtual game worlds, and (f) virtual social worlds. The impact of social media on collaborative decision-making will be presented later, in Sect. 4.3.3.

Doreian and Stokman (2013) present the evolution of social networks, which have increasingly become a priority for many social scientists; many powerful network analytic tools have been created and applied to a large variety of empirical phenomena. At present, social networks are widely used and accepted.
For example, Fortune (2016) presents an interview with Pope Francis, in which the Pope states that the Internet and social networks are a “gift of God”, if they are used wisely. Pope Francis also stated that “the Internet could be used to build a society which is healthy and open to sharing”.

4.3.2 Particular Subclasses

Kumar et al. (2010) describe a series of measurements on two real networks: the Flickr social network, which is a photo sharing platform, and Yahoo! 360. Based on the results obtained, the authors classified these social networks into three regions (Fig. 4.1):

• Singletons, who are not active participants in the social network;
• Isolated communities, which exhibit a star structure;
• A giant component, which has a very well-connected core region that persists even if the stars are removed.

In Cheung et al. (2011), the growth and popularity of online social networks, which have created a new world of communication, cooperation and collaboration, is highlighted. In a global social network like Facebook, more than a billion people around the world are inter-connected, collaborate and contribute together to create their knowledge. Koch et al. (2015) remark that while CSCW (see Sect. 3.1.1) has typically focused on groups of ten, and Group Support Systems and Collaborative Engineering (see Sect. 3.4) have focused on

groups of tens-to-hundreds, Social Computing often involves tens-of-thousands to millions of people.

Fig. 4.1 Social network classes (Kumar et al. 2010)

There are many social network platforms available on the Web, having different target users and topics. The general superclass of social networks can be particularized into several subclasses. The best known social network websites, and the ones with the greatest numbers of registered users, are the following (Alexa 2016):

• Google+, with more than 1.6 billion registered users; it is the social network website managed by Google;
• Facebook, with about 1.3 billion registered users; it is the best known social network, with a focus on photos, videos, blogs and apps;
• Twitter, with more than 645 million registered users; its focus is on RSS feeds and micro-blogging;
• Instagram, with more than 300 million registered users; it is a photo and video sharing site;
• LinkedIn, with more than 200 million registered users; it is a business and professional social network;
• Flickr, with more than 32 million registered users; it is a photo sharing, commenting and photography-related network;
• ResearchGate, with 9 million registered users; it is dedicated to science and technology.

Figure 4.2 shows a representation of the main social network platforms and their numbers of registered users, as presented by Alexa (2016). One can notice that the best known social networks share a part of their users,

Fig. 4.2 The most known social networks (Alexa 2016)

because a user who is registered on one social network platform is often also a member of other social network communities. There are other social network platforms that have attracted people's interest, but they do not have communities of users as large as the platforms mentioned above. Some of these social network platforms disappeared due to the migration of users to new platforms and communities. This is not the case of ResearchGate (https://www.researchgate.net/).

4.3.3 Usages and the Relevance to Collaborative Decision-Making

Social networks, such as Facebook, Twitter, LinkedIn and so on, have a big impact on e-collaboration, due to the rapid growth of social communities and applications. In the case of social networks, collaboration takes place when two or more individuals cooperate in order to create or develop the same thing, meaning that they have the same objective. In this case, a collaborative system can be defined as an organizational unit that is built up each time collaboration happens, whether it is formal or casual, purposeful or unexpected. The group decision-support system is a collaborative means that supports the 3C (communication, coordination and cooperation) facets of collaboration (Fig. 4.3). In a collaborative decision-making system, there may also be many intelligent agents (Padgham and Winikoff 2005) that work together in order to accomplish the system objectives. In Ciurea (2010), it is considered that the agents within a collaborative system are represented by any entity interacting or exchanging data in the

Fig. 4.3 The 3C of collaboration

system, be they people or software applications. Every agent's roles can be seen as a group of processes that the agent can execute. A process is one of the ways of using the collaborative system. The whole set of available processes defines what the collaborative system is used for and demands a specific configuration. One of the most important characteristics of agents is mobility. Starting from this characteristic, agents are classified into stationary agents and mobile agents. The mobile agents are the entry points in defining and developing mobile tools and applications. Considering the above statements, we can derive the following characteristics of the intelligent agents in a collaborative decision-making system:
• globalization, meaning the unification of objectives, features and specific activities;
• collaboration, meaning the communication, coordination and cooperation between different agents;
• self-organization, meaning the self-management of their own activities and resources.
Each of these characteristics of the intelligent agents is very important in order to achieve efficient cooperation and collaboration. If one of the characteristics is not correctly satisfied, then the collaborative decision-making system will not work properly. The development of web technologies and social networks in the field of e-collaboration has led to several models of collaborative systems applied in real life, in fields such as medicine, banking and education. Freeman (1978) introduced the concept of centrality in social networks. It comes from graph theory and means that the centrality of the whole social network should show the tendency of a single point to be "more central" than all other points in the network.
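The simplest of Freeman's centrality measures, degree centrality, can be sketched in a few lines of Python. The graph below is an illustrative star network (all node names are hypothetical data, not taken from the cited studies):

```python
# Minimal sketch of normalized degree centrality on an undirected
# social graph stored as an adjacency set per member.

def degree_centrality(graph):
    """Normalized degree centrality: degree / (n - 1)."""
    n = len(graph)
    if n < 2:
        return {node: 0.0 for node in graph}
    return {node: len(neighbors) / (n - 1)
            for node, neighbors in graph.items()}

# A small star structure: the "hub" is maximally central, the leaves are not.
star = {
    "hub": {"a", "b", "c"},
    "a":   {"hub"},
    "b":   {"hub"},
    "c":   {"hub"},
}
centrality = degree_centrality(star)
print(centrality["hub"])  # 1.0 -> the hub touches every other member
```

In the star structure characteristic of the isolated communities discussed above, the hub's centrality is maximal while every leaf has the same low value, which is exactly the "more central than all other points" tendency Freeman describes.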
Kijkuit and van den Ende (2007) have developed a dynamic framework, using findings from the creativity and behavioral decision-making literature, in order to highlight the role of social networks from idea generation to selection, which are important procedures in collaborative systems (see Sect. 3.4). As presented in Power and Phillips-Wren (2011), social networks and Web 2.0 technologies can influence both positively and negatively the rationality and effectiveness of the decision-making process. Information and communication technologies have become increasingly powerful in recent years and will continue to provide new opportunities and challenges for future developments. In Song (2015), the concept of off-line social networks, with their own unique characteristics, is presented. When dealing with social networks, we must take into consideration the term social media. As Power and Phillips-Wren (2011) mentioned, social media have influenced the decision-making process by creating more connections through which information and opinions can be received from the network participants. This is true, in particular, for the intelligence phase of the process model of decision-making. The impact of social media on the decision-making process will certainly grow in the near future, because new web technologies, starting with Web 2.0, increase the rationality and effectiveness of the decision-making process in some cases, but will negatively impact it in other cases. As the above authors foresee, the impact of social media on personal and managerial decision-making activities can be extensive, because social media are changing opinions and influencing personal choices, impacting both the decisions of consumers and the business decisions of managers.
A potential negative effect of social networks is a saturation effect that can influence decision makers, namely "the communication overload experienced by group members in centralized positions in communications networks" (Shaw 1976). In a survey by Bulmer and DiMauro (2010), one of the key findings on social media's impact on business and the decision-making process is that "connecting and collaborating are key drivers for professionals' use of social media". As presented in Camarinha-Matos and Macedo (2010), the decision-making process in a collaborative and social network is influenced by the common value system of the network and by the individual value system of each member of the network. In this case, the identification and characterization of these value systems is very important when trying to improve the collaborative processes. Because members can have different value systems, they might have different perceptions of the outputs, which can lead to non-collaborative behavior and inter-organizational conflicts. Social networks are very important in supporting collaborative decision-making, because people need to interact and exchange opinions in order to make decisions. Social networks provide a virtual environment where participants can collaborate in the decision-making process.

4.3.4 Standards

In the social networks area, some open standards have been defined regarding access inside the social network platforms. For example, Facebook users can communicate only with other Facebook users, and someone who does not have a Facebook account cannot access photos, videos, comments or messages posted on Facebook. On the other hand, Google+ does not restrict access to the content created in a circle of friends and allows this content to be shared via email between the members of that circle. In this context, one can distinguish between social networks that are open to everyone who wants to access them and social networks that restrict their content to their own communities of users. In the case of social media, some metrics have been defined in order to make the transition to specific standards. Some of these metrics can be controlled, such as the number of likes (which recently got new nuancing forms), the number of followers, or the web traffic. Some other metrics are uncontrollable, such as voice sharing and messaging. In the case of social media there is no single metric, and the measurement is focused on conversations and communities, not just coverage (Marklein et al. 2011). A draft version of the Social Media Measurement Guidelines was published on September 15, 2015, by a number of representative organizations in the field and is open for public comments. This guideline includes specific social media activity measurements, such as the tracking of users accessing content and user-generated content analysis (WWW 2015).
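The controllable metrics mentioned above can be aggregated into a simple engagement rate. The following sketch is purely illustrative; the field names and the per-follower formula are assumptions commonly used in informal practice, not part of any published measurement standard:

```python
# Hypothetical sketch: combining controllable social media metrics
# (likes, comments, shares, followers) into an engagement rate.

def engagement_rate(likes, comments, shares, followers):
    """Interactions per follower, a common informal metric."""
    if followers == 0:
        return 0.0
    return (likes + comments + shares) / followers

rate = engagement_rate(likes=120, comments=30, shares=10, followers=4000)
print(f"{rate:.3f}")  # 0.040 -> 4% of followers interacted with the content
```

The uncontrollable metrics (voice sharing, messaging) have no equally simple counterpart, which is why measurement in social media focuses on conversations and communities rather than a single number.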

4.4 Mobile Computing

4.4.1 The Concept

Mobile computing is the result of the convergence of two major pacing technology domains, namely wireless communications and computer networking. It may be viewed as a new trend in the movement towards creating a mobile Internet, meant to enable the connection of people wherever they are, without the need for land wiring. Raychaudhuri and Mandayam (2012) named the mobile Internet "a second wave of wireless revolution". Several influence factors have contributed to raising mobile computing to the status of a new paradigm and a set of technologies. The most relevant ones are:
• The increased performance and continuous miniaturization of computers;
• New business models that assume making more and more people perform on-the-go collaborative tasks, including decision-related ones, such as: real-time data collection, choosing a solution, or implementation and evaluation of the impact of a decision;
• The ever-increasing contribution of third-party application software developers, combined with the strategic initiatives of the big companies that produce wearable devices and sponsor their platforms and associated application marketplaces.
It is worth remarking that not all wireless networks support mobility and there are situations when several mobile applications use a broad spectrum of communication technologies beside wireless systems (Raychaudhuri and Mandayam 2012). Mobile computing can be represented as an aggregated (also named parent or container) class composed of three main constituent (or part) classes:
• Wireless network infrastructure (WNI);
• Mobile devices (MD);
• Mobile application software.

Each of the above part classes can be subsequently viewed as a general (or base) superclass of entities that can be further particularized as a set of several specialized (or derived) subclasses.

4.4.2 Classes and Subclasses

4.4.2.1 Wireless Networks

It is largely admitted that the [modern] history of wireless networks started at the end of the nineteenth century with the communication experiments performed by Marconi (Corazza 1998) to transmit and receive radio wave signals over long distances. Since then, the concepts and technology have evolved. A comprehensive and informative history of wireless can be found in the book edited by Sarkar et al. (2006). The story starts in 2637 BC with the Chinese emperor Huang-Ti, who was able to pursue his enemies by using his magnetic compass (Sarkar et al. 2006, p. 1). In recent times, one can notice a rapid advance of wireless technology associated with a highly increased number of users (Shim et al. 2006; Dekleva et al. 2007; Hanzo et al. 2012; Sharma 2013; Chilamkurti et al. 2013). From the perspective of infrastructure deployment, the general superclass of wireless networks can be particularized into four main subclasses, as follows (Valacich et al. 2015):
Wireless Personal Area Networks (WPAN), which are based on the well-known Bluetooth standard (Lee et al. 2007). They were initially conceived to connect various digital computer peripherals placed within a few meters' reach around a desktop computer;
Wireless Local Area Networks (WLAN), which are the radio wave-based replicas of the widely deployed wired LANs. The WLANs are based on the Wi-Fi (Wireless Fidelity) standards and represent a cheaper and more flexible solution than the traditional wired LAN. Connectivity to the access points of a WLAN is possible in a hotspot, an area of limited size;
Cellular Networks (CeN), which are conceived in accordance with the idea of creating a hierarchical structure composed of local networks covering well-defined, limited-size geographical areas (the cells) of a regular shape, in general a hexagonal one. The same frequencies can be used within several cells, but not in adjacent neighboring cells, in order to avoid channel interference.
In each cell there is at least one fixed-location transceiver, called the base station. The traffic is controlled by a switching system conceived by A.E. Joel of Bell Labs. Though the area covered is wider than that of a WLAN and the energy consumed for transmission is lower than in the case of satellite networks (see below), cellular networks also have several limitations, such as:

• The poor coverage of sparsely populated areas;
• The possible usage of different communication protocols in different covered areas, such as: GSM (Global System for Mobile Communications), thoroughly used in the EU, and FDMA (Frequency Division Multiple Access) and CDMA (Code Division Multiple Access), used to distinguish signals received from several different mobile or fixed devices. This may create incompatibility problems when the user enters an area where the cellular network uses a protocol which differs from the one used in the area he/she left. CDMA2000 (C2K or IMT Multi-Carrier) is a family of 3G (3rd generation) mobile technology standards for sending voice and data and performing data signaling between mobile phones and cell sites. At present, the usage of a common standard called Long Term Evolution (LTE) is meant to solve possible incompatibility problems in the 4th generation of networks (Rumney 2013).
In recent years, one speaks about generations of networks. Table 4.2 synthetically presents several characteristic attributes of the past, present, and expected generations (Sharma 2013).
Satellite networks (SatN), which use constellations of geostationary, low or medium orbit satellites of the Earth to ensure global coverage for data and voice transmission (Richharia 2014). The best-known SatN are:
• The British geostationary Inmarsat satellite system, which was initially conceived, in 1979, as a maritime communication system (Ghais et al. 1987). It is now also used for areas where there are no landlines, or where landlines are out of reach for cellular networks or marine VHF (Very High Frequency) radio stations;

Table 4.2 Main characteristic attributes of mobile network generations (Sharma 2013)

Generation | Time period | Transfer rate (bps) | Technology used | Service provided | Multiplexing
1G | 1970–1980 | 2K | Analog cellular | Mobile telephony (voice) | FDMA
2G | 1990–2004 | 64K | Digital cellular | Digital voice, SMS, data | TDMA, CDMA
3G | 2004–2010 | 2M | CDMA 2000 | Integrated HQ audio, video, data | CDMA
4G | Now | 1G | WiFi, WiMax, LTE | Dynamic info. access | CDMA
5G | 2020+ | >1G | wwww | Dynamic info. access, AI capabilities on devices | CDMA

nG, nth generation; bps, bits per second; F/T/CDMA, Frequency/Time/Code division multiple access; CDMA2000, a family of 3G mobile technology standards

• The Iridium satellite constellation system, which was set up in 1998 and is based on 66 active Low Earth Orbit (LEO) satellites. It also includes additional spare satellites to be used in case of failures (Maine et al. 1995).

4.4.2.2 The Wearable Intelligent Devices

One of the earliest mobile communication systems was the 40 kg terminal used in the Swedish Mobiltelefonisystem A (MTA) and conceived by Ericsson (Billström et al. 2006). MTA was an automatic system operated in the 160 MHz frequency band by the Swedish telecom company Telia. It was introduced in 1956 and had only 125 subscribers in Stockholm and Göteborg. Several years earlier, in the late 1940s, a manual system named Mobile Telephone Service was introduced in the USA by AT&T to serve one hundred towns and highway corridors. The subscriber equipment weighed about 36 kg. In 1982, the Compaq Portable was released as a more affordable and convenient alternative to the IBM PC (Dahmke 1983). The computer, which looked like a sewing machine, cost around $3,000, weighed 13 kg and had a 5¼-inch disk drive with 360 KB storage capacity. Since then, computers have become more powerful and smaller in size, and communication terminals lighter and more intelligent. At present, one may decompose the general superclass of mobile intelligent devices into several particular subclasses, as follows (Valacich et al. 2015). The currently available devices that are relevant with respect to the subject of this book are the following:
Laptops, which are the portable, functionally similar replicas of the fixed desktop computers. If utilized in an on-the-go manner, their use may be limited by battery life.
Handheld Tablet Computers, which use multi-touch screens to interact in a very convenient way with the displayed content. They can be very effective in solving practical problems by using easily installable third-party application programmes. The most widespread tablet computers are the Apple iPad, introduced on the market in 2010, Microsoft Surface and Samsung Galaxy. The tablet is perhaps one of the most representative devices in the "beyond the PC" movement (Murphy 2011).
Smartphones, which represent a class of very portable devices characterized by an "all-in-one" functionality. A smartphone can be used as a camera; it possesses a large solid-state memory to store images, still pictures and videos, a GPS (Geographic Positioning System) navigation unit, and a broad spectrum of wireless connectivity options, including Bluetooth, Wi-Fi and cellular systems. As in the case of tablet computers, third-party application programmes are available and can be easily downloaded and installed on smartphones. Though their battery life is longer than that of tablets, their reduced processing power and the small size of the screen limit, to a certain extent, their usage as surrogates for desktop computers.

The best-known products on the market are: the Apple iPhone, introduced on the market in 2007, Samsung Galaxy Nexus, and Nokia Lumia. Even though mobile devices have some minor disadvantages, as will be presented in Sect. 4.4.3.2, it is considered that mobile technologies have seen a huge increase in market share in recent years.

4.4.2.3 Software

There are two main particular subclasses of the general class of software tools that run on mobile devices, namely operating systems and applications. The dominant operating systems and platforms are:
• Google's Android, which is based on the Linux operating system. It was developed by Google together with the Open Handset Alliance (Meier 2012) and released in 2008. The name Android designates an open access platform, a software stack composed of an operating system, middleware and application software (called, for short, applications). At the lowest layer there is the Linux kernel, v. 2.6 and higher;
• Apple's iOS, which was released in 2007. A useful comparison of iOS and Android is provided by Goadrich and Rogers (2011);
• Microsoft Windows Phone, which was released in 2010.
There are several other software products on the market, such as: Symbian of Nokia, BlackBerry OS of BlackBerry Ltd, webOS of HP, Bada of Samsung Electronics, and so on. According to a study published by Gartner (http://www.gartner.com) in 2015, the main mobile operating systems found on mobile devices are as follows: Android, with a market share of 82.2 %; Apple iOS, with 14.6 %; and Microsoft Windows Phone, with 2.5 %. The subclass of application software can be further particularized into two subclasses:
Proprietary Applications, which are conceived and maintained by manufacturers to run on specific (groups of) devices.
Third-party developed Applications, which are meant to be easily installed and used on tablet computers and smartphones (Wasserman 2010). The rapidly growing number of applications represents a prominent characteristic feature of mobile computing. The applications are organized in marketplaces (such as Apple's App Store and Google Play), so that potential users can perform searches, make choices in accordance with their perceived needs and download the selected pieces of software onto their devices.
The third-party application developers are provided by manufacturers with application programming interfaces (API), so that they can be effective in making full use of the capabilities of the operating systems and devices. In recent years, one can notice a multi-homing syndrome (Hyrynsalmi et al. 2012), which consists in placing an application on several platforms. The application marketplaces are maintained by the sponsors of the IT platforms in different ways.

While the App Store is a rather well-controlled marketplace, where the third-party applications need approval and the devices must be Apple products, Google Play, the marketplace of Google's Android platform, is an open one, where developers can make use of an open source operating system and freely place their applications (Valacich et al. 2015). The applications are not checked for quality. Consequently, AppBrain, the site for rating and recommendation, reported that 33 % of applications were rated "low quality" by users (Kelley et al. 2013).

4.4.3 Usage and Relevance to Collaborative Decision-Making

In the context of the decision-making process, mobile technologies gain new meanings, because all the situations that were possible in a classical format, using desktop technologies, are now changing due to this kind of migration. As shown by Benedict (2012), mobile technologies have created new kinds of business models; they have allowed new ways of selling products and services, new ways of doing marketing, and new ways of delivering. Mobile applications provide the main advantage that they can be accessed anywhere and anytime. At present, a significant number of companies have branches in different locations around the world and their employees have to travel from one location to another in order to fulfill their tasks. They use mobile applications in order to access resources from different locations (Ciurea 2010). When dealing with mobile applications, an important characteristic feature is the degree of integration with other applications, especially with social networks. The new generation of mobile applications can be developed to automatically collect personal information about users from the social platforms presented in Sect. 4.3. This feature is useful when creating user accounts in mobile applications, because users will have the possibility to automatically connect with their social media accounts (Ciurea and Pocatilu 2012). Another important feature of mobile applications, offered this time by Twitter, is the possibility to post questions or give answers to different problems directly, using the tweet option offered by this social network. Some representative smartphone producers, like Apple, have integrated the Twitter social network capabilities directly into the operating system, so that users can "tweet" using the menu options of their smartphones.
Beyond the facilities offered by social networks for using the same credentials for authentication in mobile applications to access sensitive content, the most important feature is collaboration. Users can collaborate with their colleagues from Facebook when learning or making a decision, so that collaborative learning and collaborative decisions become very effective. The applications of mobile computing are numerous and diverse. One can classify them in accordance with the domain of human activity (Dinh et al. 2013). The classification of applications is made according to the content of the human activities they enable and support. It will be presented in the sequel by adopting and adapting the taxonomy of enterprise mobile applications proposed by Unhelkar and Murugesan (2010).

4.4.3.1 Subclasses

The general class of mobile usages may be viewed as a superclass of a set of more particular subclasses, as follows.
M(obile)-Broadcast subclass, which consists in sending useful and relevant pieces of information to registered or casual, unregistered users who are located in a certain place at a certain moment. Orders to implement a decision or directions for actions to be performed in an emergency situation illustrate the case. This may be called broadcast push (Sharaf and Chrysantis 2004). There are also numerous other possible applications. A particular case consists in sending invitations to the members of crowds to contribute to solving a particular hard problem (as presented in Sect. 3.1.4) by using smartphones (Chatzimilioukis et al. 2012).
M(obile)-Information subclass of applications, which consists in providing the requested information necessary for decision-making in a broadcast pull scheme (Sharaf and Chrysantis 2004). The authors describe a solution based on mobile devices on which OLAP (Online Analytical Processing) tools are installed, with a view to enabling the client to issue queries to a base OLAP server and download the resulting reports. In general, such information should meet the requested levels of usability and privacy. A more common case consists in sending the outputs of a recommender system (see Sect. 3.1) to a user who intends to select a product or a service (Wietsma and Ricci 2005).
M(obile)-Transaction subclass, which groups those applications that facilitate the execution of transactions, such as releasing orders resulting from a decision-making process or making electronic payments for the work carried out by data feeders or the experts consulted during various phases of the decision-making process. Such applications should ensure trusted services and be characterized by high quality indicators with respect to security, responsiveness and unambiguity.
M(obile)-Operation subclass of applications, which are meant to facilitate operational activities within the enterprise. They must work in real time and be integrated in the overall information system of the enterprise.
M(obile)-Collaboration subclass, which contains those applications meant to support collaborative activities performed within a department, enterprise or group of enterprises, together with suppliers and clients. Several applications that run in the context of social networks can be placed within this subclass. For example, Perez et al. (2010) present a prototype of a Group DSS (see Sects. 2.3.1, 2.3.4.1 and 3.1.3) in which fuzzy models are used to represent the changing preferences of a group of experts who reach consensus in a dynamic process supported by mobile communications. Social networks represent a major candidate for using mobile computing to enable effective collaborative activities (Chang et al. 2007).
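The distinction between the broadcast push scheme of the M-Broadcast subclass and the broadcast pull scheme of the M-Information subclass can be illustrated with a small sketch. All class and method names below are assumptions made for this example, not part of the cited solutions:

```python
# Illustrative sketch of broadcast push (server-initiated delivery)
# versus broadcast pull (client-initiated requests).

class BroadcastChannel:
    def __init__(self):
        self.subscribers = []   # devices reached by push
        self.published = {}     # items made available for pull

    def subscribe(self, device):
        self.subscribers.append(device)

    def push(self, message):
        """Push: the server initiates delivery to every subscriber."""
        for device in self.subscribers:
            device.inbox.append(message)

    def publish(self, key, report):
        """Make a report available for on-demand pull."""
        self.published[key] = report

    def pull(self, key):
        """Pull: the client initiates the request for a specific item."""
        return self.published.get(key)

class MobileDevice:
    def __init__(self):
        self.inbox = []

channel = BroadcastChannel()
phone = MobileDevice()
channel.subscribe(phone)
channel.push("evacuate area B")               # decision order, pushed to all
channel.publish("sales_q3", {"total": 1200})  # report, pulled on demand
print(phone.inbox)               # ['evacuate area B']
print(channel.pull("sales_q3"))  # {'total': 1200}
```

An emergency directive naturally fits the push path, whereas an OLAP report requested by a decision maker fits the pull path, which is why the two subclasses carry different usability and privacy requirements.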

4.4.3.2 Advantages and Open Problems

Mobile communications and computing have gained traction in the last two decades due to two main properties (Looney et al. 2004; Valacich et al. 2015):
• Ubiquity, meaning that the information and data processing equipment is apparently available from any place at any time;
• Localization, meaning that the information provided to the user can be tailored to the perceived characteristic attributes of his/her physical location.
There are several derived advantages (Valacich et al. 2015; Unhelkar and Murugesan 2010; Perez et al. 2010):
• The needed up-to-date information can be provided without delay;
• Possible group meetings for decision-making or negotiation can technically be organized in a flexible way at any moment, without any need for group members to travel or be located in special decision rooms (see Sect. 3.1). Decision makers, their assistants, data feeders, and invited experts can move freely and possibly carry out other activities in parallel. One should, however, notice the other side of the coin: some people might accept to be called for decision activities only in time intervals in which they are available or willing to participate;
• The right decision makers and relevant experts and assistants can be found and called via a location-based service (LBS). Though LBS is an effective and, in most cases, convenient service, disclosing the location might sometimes be undesirable and even dangerous. To protect the person and/or his organization, methods such as anonymization, pseudonymization, query obfuscation, or sending dummy queries are available (Shokri et al. 2014).
There are still other open questions and limitations that prevent the use of mobile communication and computing to the desired extent (Perez et al. 2010; Wang et al.
2012; Ciurea and Tudorache 2014; Brandas and Didraga 2014):
• Usability issues concerning: (a) the speed of services, which depends on the quality of the wireless link; (b) the reduced storage and processing power; (c) the small size of the screen, associated with the lack of multiple windows, and the possibility of theft or loss;
• Interoperability issues caused by the heterogeneity of mobile devices and standards;
• Security issues caused by the low quality of applications and/or attacks by unauthorized persons;
• Ecological issues concerning power consumption.
The phablet (Segan 2012; Chi and Lai 2015) could be a solution to the problem of small screen size. Mobile cloud computing, which will be presented in the next section, is meant to overcome other problems, such as: short battery life, energy consumption, and the limited storage and processing power of mobile devices. The possibly unsatisfactory information exchange among the members of a group who have to solve complex or sensitive decision problems can be compensated for by the use of complementary technologies, such as video-conferences (Gray et al. 2009; Bekkering and Shim 2006). Biometric methods to handle the security issues are addressed in Sect. 4.5.
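Two of the LBS protection methods named above, query obfuscation and dummy queries, can be sketched in a few lines. This is a hedged illustration only; the noise radius, the number of dummies and the sample coordinates are illustrative assumptions, not values from Shokri et al. (2014):

```python
# Sketch of two LBS privacy techniques: perturbing the reported
# coordinates (query obfuscation) and hiding the real position
# among decoys (dummy queries).

import random

def obfuscate(lat, lon, radius_deg=0.01, rng=random):
    """Report a position perturbed by uniform noise within radius_deg."""
    return (lat + rng.uniform(-radius_deg, radius_deg),
            lon + rng.uniform(-radius_deg, radius_deg))

def with_dummies(real_position, n_dummies=4, rng=random):
    """Return the real query hidden in a shuffled set of dummy positions."""
    queries = [real_position]
    for _ in range(n_dummies):
        queries.append((rng.uniform(-90, 90), rng.uniform(-180, 180)))
    rng.shuffle(queries)
    return queries

reported = obfuscate(44.4268, 26.1025)       # a perturbed position
queries = with_dummies((44.4268, 26.1025))
print(len(queries))  # 5 -> the LBS cannot tell which query is real
```

Both techniques trade service accuracy for privacy: the obfuscated position degrades location-tailored results slightly, while dummy queries multiply the traffic sent to the LBS.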

4.4.4 Mobile Cloud Computing

The information and communication technology (I&CT) sector is a serious energy consumer and environmental polluter. Based on the data available in the literature, Wang et al. (2012) stated that the I&CT sector was accountable for 2 % of the global energy consumption and associated with 3 % of the total volume of CO2 emissions. More than half of the I&CT sector's energy consumption is attributed to wireless networks and mobile devices. Consequently, energy consumption is a major subject of concern for the manufacturers, planners and users of mobile technologies. There are still other subjects of preoccupation concerning the new technology, which were presented in Sect. 4.4.3.2. The most relevant for the topics of this section are: (a) the limited data storage capacity and processing power, (b) the low reliability of the mobile devices, (c) the finite lifetime of the batteries, (d) the heterogeneity and availability of the wireless network, and (e) the security of communications. All the above problems are intrinsic deficiencies and collectively represent a serious obstacle to the usage of mobile technologies to the fullest extent possible. Cloud computing is a concept, a business model, and a set of technologies. It basically implies that the applications are delivered on demand, as services, by a cluster of information technology entities (hardware and software), in a way similar to how other public utilities, such as gas or electricity, are offered, in a "pay-per-use" business model. Foster et al. (2008) define Cloud computing as:

A large-scale distributed computing paradigm that is driven by economies of scale, in which a pool of abstracted, virtualized, dynamically-scalable, managed computing power, storage, platforms, and services are delivered on demand to external customers over the Internet.
Ergu et al. (2013) propose the AHP (Analytic Hierarchy Process) for task scheduling and resource allocation in cloud computing. There are many applications of Cloud computing (Narasimhan and Nichols 2011; Petcu et al. 2013; Stanescu et al. 2015a, b). In the next section, we will examine the combination of Cloud computing with mobile wireless technology, which represents a possible solution for overcoming the inherent problems of current mobile devices by offloading data storage and computing tasks to remote resource providers (Mehta et al. 2013; Fernando et al. 2013).
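The core AHP step behind proposals such as Ergu et al. (2013), deriving priority weights from a pairwise comparison matrix, can be sketched as follows. The geometric-mean approximation of the principal eigenvector is used here for simplicity, and the matrix values are illustrative assumptions, not data from the cited paper:

```python
# Minimal AHP sketch: priority weights for competing cloud tasks,
# derived from a pairwise comparison matrix via the geometric-mean
# approximation of the principal eigenvector.

def ahp_weights(matrix):
    n = len(matrix)
    geo_means = []
    for row in matrix:
        prod = 1.0
        for v in row:
            prod *= v
        geo_means.append(prod ** (1.0 / n))
    total = sum(geo_means)
    return [g / total for g in geo_means]

# Hypothetical judgments: task A is 3x as important as B, 5x as C;
# B is 2x as important as C. Reciprocals fill the lower triangle.
pairwise = [
    [1.0,   3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
weights = ahp_weights(pairwise)
print([round(w, 3) for w in weights])  # weights sum to 1, A ranked first
```

The resulting weights can then drive task scheduling or resource allocation decisions, with the most important task receiving resources first.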

4.4.4.1 The Models

Mobile Cloud Computing (MCC) is defined by the MCC Forum as

An infrastructure where both data storage and data processing happen outside of the mobile device. Mobile cloud applications move the computing power and data storage away from mobile phones into the cloud, bringing applications and mobile computing to not just smart phone users, but a much broader range of mobile subscribers (Mehta et al. 2013; Dinh et al. 2013).

The most commonly used model of MCC assumes a sequence of basic operations as follows (Dinh et al. 2013; Mehta et al. 2013):

• The mobile devices, such as laptops, handheld tablet computers, smart phones and so on, send their service requests to the wireless network;
• The network operators perform services, such as authentication, authorization and accounting (AAA), based on the stored subscriber data;
• The requests are sent via the Internet to the cloud controllers of the data centers operated by the cloud service providers.

There are also other models of mobile cloud computing, reviewed by Fernando et al. (2013):

• The peer-to-peer scheme, which assumes that other mobile devices in the network act as resource providers;
• The cloudlet concept, which is based on the idea of transferring workloads from mobile devices to an intermediary local entity composed of several multi-core computers connectable to remote cloud servers.

4.4.4.2 Advantages and Limitations

The combination of the "two technologies" contributes to overcoming the above-mentioned obstacles to the usage of mobile technology. At the same time, it extends the utilization of cloud computing. In MCC, the services of the cloud can be demanded by mobile devices, such as smart phones, computer tablets and so on, in addition to fixed devices, such as PCs. There are numerous and various applications of MCC, such as: m(obile)-commerce, m-learning, m-banking, m-healthcare, m-gaming, voice, video, and expertise searching services (Dinh et al. 2013; Ciurea 2010; Pocatilu et al. 2013). It is worth remarking that MCC enables a new, more flexible, effective and convenient working style for the participants in decision-making activities. MCC extends the set of potential real-time "data feeders" in the intelligence phase (see Sect. 2.1.1). Also, the most appropriate experts can be located (if they permit it) and involved in evaluating the decision alternatives in the choice phase. The experts can now use the powerful resources of the cloud to perform possibly complex analyses, answer questions and express their views on-the-go, wherever they are. Recommender systems (see Sect. 2.3.2) for common users can be effectively implemented too. Other advantages offered by mobile cloud computing are (Dinh et al. 2013; Mehta et al. 2013):

• Extending the lifetime of the mobile device batteries by offloading the burden of complex computations from the mobile devices with scarce resources to the powerful servers in the cloud. Thus, long execution times on mobile devices are avoided and the power consumption is diminished accordingly.
• Virtually and dynamically improving the storage and processing performance of the mobile devices by storing and providing access to large data sets located on cloud servers, resulting in a faster execution of complex calculations in the cloud.
• Improving the reliability and security of operations and data due to complex protection mechanisms and services for authentication and for the scanning and isolation of viruses and other malicious code.
• Dynamic, on-demand provisioning of the necessary resources.

Beside the advantages offered by mobile cloud computing, one can notice several limitations and challenges that are caused by the very nature of the two technologies (cloud computing and mobile computing) combined. The most serious are (Mehta et al. 2013; Filip 2012; Fernando et al. 2013):

• the scalability, availability, and high running costs of the mobile networks and cloud services;
• the heterogeneity of the radio access technologies and the corresponding wireless interfaces;
• the users' concern about privacy and security issues and their uncomfortable feelings caused by the perception of being "captive clients" of the cloud computing service providers and of sharing their data with unknown people.
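The battery-extension argument above can be illustrated with a deliberately simplified energy model: offloading pays off when the energy to transmit the input data is lower than the energy to compute locally. All parameter names and values below are invented for illustration; real offloading decisions also weigh latency, privacy and network state.

```python
# Illustrative (simplified) offloading decision for a mobile device:
# E_local = P_cpu * cycles / speed_local  versus
# E_offload = P_tx * data_bits / bandwidth.

def should_offload(cycles, speed_local, p_cpu, data_bits, bandwidth, p_tx):
    """Return True when offloading is expected to save device energy."""
    e_local = p_cpu * cycles / speed_local    # joules to compute locally
    e_offload = p_tx * data_bits / bandwidth  # joules to ship the input data
    return e_offload < e_local

# A compute-heavy task with little input data favors offloading:
heavy = should_offload(cycles=5e9, speed_local=1e9, p_cpu=0.9,
                       data_bits=1e6, bandwidth=5e6, p_tx=1.3)
# A light task with a large input favors local execution:
light = should_offload(cycles=1e7, speed_local=1e9, p_cpu=0.9,
                       data_bits=8e7, bandwidth=5e6, p_tx=1.3)
```

The asymmetry between the two calls shows why offloading helps exactly the "complex computations" case named in the bullet above.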

4.5 Biometric Technologies for Virtual Electronic Meetings (By I. Buciu)

In Sect. 3.1, we presented the concepts of computer-supported e-collaboration and group [decision] support systems (G[D]SS). In virtual electronic meetings, which are meant to support group decision-making activities when the participants are placed in different remote locations, a fundamental requirement is that access be granted to authorized persons only. More than two decades ago, Dase et al. (1995) noticed that

the dispersed environment requires special security measures in addition to the usual controls, such as limiting attendance, using passwords, and isolating the communication network. In a decentralized environment, individuals may masquerade as either a valid participant or as the system host. There may be some difficulty in limiting attendance. In the decision room context, participants have face-to-face interactions, and they receive acknowledgment of their participation in the meeting, either explicitly or implicitly. In distributed synchronous computer conferences, the set of interactive partners is defined but the number of participants is not limited by the size of a decision room. Participants in a decentralized session may not be sure if an individual is present and participating, present and observing, or has left the session, since the anonymity issues prevent identification of specific user actions.

At present, when the groups grow ever larger to include external experts and consultants who contribute to decision activities, such as intelligence and alternative identification and evaluation (see Sect. 2.1.1), possibly by using mobile devices (see Sect. 4.4) and acting asynchronously, the problem of authorized access might become even more serious. Using the services of cloud computing may bring in additional subjects of preoccupation. Fortunately, modern technology provides solutions: biometrics (Jain et al. 2006; Ashbourn 2014a, b). In a nutshell, a biometric security system requires a user to provide some biometric features, which are then verified against stored biometric templates. Nowadays, the traditional password-based authentication method tends to be replaced by, or combined with, advanced biometric technologies. Biometrics-based authentication is becoming increasingly appealing and common for most human-computer interaction devices. To give only one recent example, Microsoft augmented its brand new Windows 10 OS version with the capability of supporting face recognition when the user logs in. In the sequel, we aim at briefly introducing biometrics-related items, including principles, definitions, biometric modalities and technologies, along with their advantages, disadvantages or limitations, and standards.
Though the vast majority of biometric applications are currently related to security and are used extensively for government and military purposes, their usage in virtual meetings can be envisaged. A preliminary version of the remaining part of this section was published in the International Journal of Computers, Communications and Control by Buciu and Gancsadi (2016).

4.5.1 The Concept

A biometric feature can be defined as a physiological (face, fingerprints, iris, and so on) or behavioural (gait, voice, signature, etc.) attribute of a human being that can discriminate one individual from another. Nowadays, the great interest in biometric recognition systems can be justified by the increased demand for security. The goal of a biometric-based recognition system is either the automatic identification or the verification of identities, given input data such as images, speech or videos. Unlike traditional means, such as passwords, biometric traits have some advantages: they cannot be stolen (although spoofing attacks may exist to tamper with the biometric system), lost or forgotten. However, to be reliable and useful, biometric traits should be unique and persistent over time. Some other criteria should be met, such as user convenience and acceptability (mainly due to privacy reasons). Biometric recognition is usually performed by extracting a biometric template from the query issued by the input device and comparing it against some enrolled biometric templates. The comparison is performed using one of the modes below (Ashbourn 2014b, p. 65; Kaklauskas et al. 2011; Kaklauskas 2015):

• Verification (or authentication), which is the one-to-one process where the query biometric content is compared with the characteristics of the participant to verify his claimed ID and subsequently provide access. The output is binary, either accept or reject, based on a matching procedure.
We should note here that a biometric authentication technology may be used in conjunction with traditional authentication methods, such as passwords, passports, PINs, smart cards, access tokens and so on, employed as a second authentication factor;
• Identification (or recognition), which is the one-to-many process where the query is compared against each enrolled biometric template (or multiple templates) from the database, to search for the identity of the query. Identification is a bit more complex than verification, as the system serves as both identifier and authenticator;
• Adapting the information system function to the detected stress of the user.

A biometric-based recognition system needs an enrollment procedure, which allows the registration of persons in a biometric database that is meant to be used later for identification or verification. In the enrollment process, the biometric features are acquired and stored in the database. The acquired initial data may undergo some pre-processing steps, depending on the biometric modality. For instance, in the case of images, histogram equalization may help when the image suffers from illumination imbalance. For audio data, separating the voice from the background may also be a pre-processing step. Biometric features are constructed in a feature extraction step, resulting in a biometric template, which is then stored in the database. Matching is a complex pattern recognition problem between the enrolled samples and the test one. A matching score is computed to reflect the similarity between two biometric templates. The person recognition process is challenging because the representations of the same biometrics are basically taken either by different sensors or, more often, at two or more different time moments, so that the acquisition conditions between the enrolled and test samples might greatly vary due to various factors, such as noise, change in illumination, partial occlusion, different resolution and so on.
Consequently, the matching score might be lower than its optimum value. A threshold level is set up next for the final acceptance. A matching score higher than the threshold gives a match and consequently grants access, while a lower score leads to rejection. The associated risks for any biometric system are: (a) false accepts (when an unauthorized person is wrongly accepted), represented by the False Acceptance Rate (FAR), and (b) false rejects (when an authorized person is incorrectly denied access), represented by the False Rejection Rate (FRR). An ideal biometric system should have FAR = FRR = 0. In real life, no such biometric system or technology exists. Being related to the threshold level, the FAR and the FRR show contrary variations. More precisely, a low threshold level would decrease the FRR and increase the FAR. This situation is preferred in applications where the level of security is not critical. However, in applications where a high security level is requested, the threshold is set to a very high value to favor a low FAR at the expense of a high (possibly disturbing) FRR. When the FRR equals the FAR, we have the Equal Error Rate (EER), a measure that is often reported when the performance of a biometric system or technology is assessed.
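The FAR/FRR trade-off can be made concrete with a small, self-contained sketch. The score lists below are invented (higher score = better match); real evaluations sweep thousands of genuine and impostor scores.

```python
# Sketch: FAR/FRR trade-off for a score-based biometric matcher.

def far_frr(genuine, impostor, threshold):
    """FAR: fraction of impostor scores accepted (>= threshold);
    FRR: fraction of genuine scores rejected (< threshold)."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

genuine  = [0.91, 0.86, 0.78, 0.88, 0.62, 0.95]   # same-person scores
impostor = [0.30, 0.45, 0.55, 0.70, 0.25, 0.40]   # different-person scores

# Sweep thresholds; the EER lies where FAR and FRR are closest.
eer_t = min((t / 100 for t in range(101)),
            key=lambda t: abs(far_frr(genuine, impostor, t)[0]
                              - far_frr(genuine, impostor, t)[1]))
```

Raising the threshold above `eer_t` trades a lower FAR for a higher FRR, which is exactly the high-security regime described above.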

4.5.2 Particular Subclasses

The general class of biometric systems and associated technologies can be particularized into several subclasses in accordance with various criteria, such as:

• physiological approaches (which rely on direct measurements of parts of the human body) versus behavioral ones (which typically measure the behavior of the user over time);
• cooperative versus non-cooperative;
• mono-modal versus multimodal biometric systems;
• contact versus touchless versus "at distance" (or remote) technology;
• server-based versus mobile-based biometric technology;
• human versus no human monitoring for data acquisition.

The majority of commercial biometric technologies involve physiological measurements, which are considered to remain steady over relatively large time intervals. Such measurements may include the following modalities (Ashbourn 2014a, pp. 3–14): (a) face recognition, (b) facial thermography, (c) fingerprint recognition, (d) hand geometry based recognition, (e) ear geometry based recognition, (f) iris recognition, (g) retina recognition, and (h) vascular pattern recognition. The above biometric modalities have, more or less, reached maturity. The appropriateness of each subclass highly depends on the application type. For instance, biometric authorization-based access for telepresence-based group virtual meetings might require either contact or contactless sensing technology, but no human monitoring is typically involved at the remote sensory spot. In the sequel, several technologies which are appropriate for computer-mediated virtual meetings meant for group decision-making are reviewed. Figure 4.4 depicts five potential biometric modalities that can be used for user recognition or authentication.

4.5.2.1 Face Recognition

The roots of face identification and verification can be traced back to the early 1960s, when the computer vision community started to address the problem.

Fig. 4.4 Five biometric modalities that can be used to recognize or authenticate a user

While face recognition technology is somewhat inferior in technical performance to other, more recent biometric technologies (such as iris recognition), it is more widely accepted due to its major advantage: it is the only physiological biometrics that can be reliably measured at a distance. Moreover, the authentication of the participants can happen without their explicit interaction with the sensor. The performance of face recognition systems can vary considerably and depends upon the context and various other factors. Modifications of facial features are caused by both long-term and short-term changes. Long-term changes refer to aging, where prominent features and wrinkles may show up and permanently change the facial texture. In this case, periodic enrollments are necessary to update the biometric template. Short-term changes may refer to weight loss or gain. Other factors affecting the system's accuracy are partial occlusions (growing a beard or moustache, glasses, scarfs and so on) or various environmental conditions (distance from the camera, varying lighting conditions, noise, motion blur, etc.). Another factor is the face position. While the enrollment image is usually taken in frontal pose, the matching process may suffer from non-frontal pose acquisition. Face recognition systems and technologies are based on either 2D or 3D representations of the face appearance. The 2D-based face biometric systems are pose-variant and rely on the information conveyed in the gray-level structure of the facial image (2D face texture), while the 3D approaches are pose-invariant and involve the volumetric structure of the face along with its depth map. The 3D image acquisition technologies come with different costs and approaches.
The most cost-effective solution, called stereo acquisition, is to use several calibrated 2D cameras to acquire images simultaneously, followed by 3D reconstruction. While the acquisition is fast, this approach is highly light-sensitive. Changes in illumination can lead to image artifacts compromising the performance of the 3D face recognition system. An alternative is to project a structured light pattern on the facial surface during acquisition. A third solution relies on active sensing, where a laser beam scans the face surface, generating a reflected facial pattern. Once the facial data are acquired, either as 2D or 3D, an automatic landmarking process is necessary to detect facial points of interest such as the eyes, eyebrows, mouth contour, nose, chin and so on. These landmarks are further used for face registration, so that facial features (landmarks) are localized at the same position (determined by geometric coordinates) across multiple face images. The process continues with feature extraction and matching. For feature extraction, various techniques were proposed, including subspace methods (principal component analysis, linear discriminant analysis, independent component analysis), filtering approaches (different implementations of Gabor wavelets) or statistical approaches (including various versions of local binary patterns). For the last step, matching, methods ranging from distance-based classifiers (Euclidean distance, cosine similarity measure) up to complex classifiers (neural networks or support vector machines) were implemented. The literature provides a plethora of methods and the interested reader could consult the most recent works on face recognition (Zhou 2006; Tistarelli et al. 2009; Li et al. 2011; Marsico et al. 2014), where all related aspects and techniques are presented in great detail.
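As an illustration of the distance-based matching step mentioned above, the following sketch performs one-to-many identification with cosine similarity over tiny, made-up feature vectors. The names, vectors and threshold are assumptions for illustration, standing in for the output of a real extractor (PCA/LDA coefficients, Gabor responses, etc.).

```python
# Minimal sketch of the final matching step of a face recognizer.
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

enrolled = {                      # identity -> stored template
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
}

def identify(query, threshold=0.9):
    """One-to-many search; reject when the best match is too weak."""
    best = max(enrolled, key=lambda name: cosine(query, enrolled[name]))
    return best if cosine(query, enrolled[best]) >= threshold else None

match = identify([0.85, 0.15, 0.35])   # a query near Alice's template
```

Dropping the `threshold` test turns this into pure identification; keeping it makes the function behave like the verification-with-rejection mode described in Sect. 4.5.1.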

4.5.2.2 Iris Recognition

Iris recognition is the most reliable type of biometric identification. It is considered to be the ideal biometrics in terms of uniqueness and stability (its features do not vary over time), leading to massive deployment in large-scale systems that proved to be very effective (Daugman 1994). The iris is the colored portion of the eye surrounding the pupil, and the biometric system searches for its specific intricate patterns composed of many furrows and ridges. The basic steps are: image acquisition, iris localization using landmark features and segmentation, biometric template generation, and biometric template matching. The acquisition factors are: resolution, signal/noise ratio, contrast and illumination wavelength. Once the iris is segmented, it may undergo a pseudo-polar coordinate transformation to take into account variations in pupil size. To capture the iris image, conventional iris recognition technology requires a very short focal length, increasing the intrusiveness of this approach. However, iris recognition systems where the iris image is captured at longer distances are already operational nowadays. While for short focal lengths the image resolution is not an issue, it becomes very challenging with increasing distance, leading to a significant drop in accuracy. One should notice that, regardless of technology, one important limitation of iris recognition systems is that the approach is not applicable when the user wears contact lenses.

Behavioral biometrics usually does not explicitly ask the user to be cooperative and thus seems to be more transparent, user-friendly, less intrusive and more convenient than its physiological counterparts. On the downside, behavioral biometrics suffers from a low level of uniqueness and permanence compared to physical biometrics. Moreover, the method's accuracy is rather low for identification and acceptable for verification. The approaches can be split into the following subclasses: (a) keystroke analysis based authentication, (b) mouse dynamics, (c) speaker recognition, and (d) gait recognition.

4.5.2.3 Keystroke Analysis Based Authentication

Keystroke analysis based authentication is defined as the process of recognizing an individual from his/her typing characteristics. The verification can be performed in either static (text-dependent) or dynamic (text-independent) mode. The characteristics are typically composed of the time between successive keystrokes (more precisely, the inter-stroke latency), dwell times (i.e., the time a key is pressed down), the overall typing speed, the frequency of errors (use of backspace), the use of the numeric pad and so on. For a large-scale application, these characteristics are not unique among too many participants. Therefore, they cannot be reliably used as recognition features. Instead, they can be suitable for verification systems. Aside from the relatively low accuracy, the enrollment procedure is the major drawback of such behavioral biometric systems. To generate representative biometric templates, the user might be asked to repeat the enrollment procedure, by providing a username, password or a specific text, a large number of times. An interesting application is presented in Giot et al. (2009), where keystroke dynamics based authentication has been analyzed in the context of collaborative systems. In the DISPATCHER system (see Sect. 2.4.2.2, Fig. 2.4), the set of services and models provided is adapted to the user's perceived emotional state, caused by his/her level of interest in the decision problem or the knowledge level associated with the urgency of the problem (Filip 1995, 2008).
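A minimal sketch of the feature extraction described above, with invented timings: dwell time is release minus press for each key, and flight time (the inter-stroke latency) is the gap between one key's release and the next key's press.

```python
# Sketch: deriving keystroke-dynamics features from raw key events.
# Each event is (key, press_time_ms, release_time_ms); values invented.

def keystroke_features(events):
    dwell = [r - p for _, p, r in events]          # how long each key is held
    flight = [events[i + 1][1] - events[i][2]      # gap between keys
              for i in range(len(events) - 1)]
    return dwell, flight

def distance(sample, template):
    """Mean absolute deviation between two feature sequences."""
    return sum(abs(a - b) for a, b in zip(sample, template)) / len(sample)

events = [("p", 0, 95), ("a", 160, 240), ("s", 310, 420), ("s", 500, 590)]
dwell, flight = keystroke_features(events)
# dwell -> [95, 80, 110, 90]; flight -> [65, 70, 80]
```

A verification system would compare `dwell` and `flight` against the enrolled template with a distance measure such as `distance` above and accept when the deviation stays under a tuned threshold.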

4.5.2.4 Mouse Dynamics

A behavioral profile can also be constructed using the mouse actions performed by a user. Mouse-derived features are easy to capture without the user's knowledge. Mouse-based authentication involves a registration phase and a login phase. A template is built using the mouse features captured at the time of registration. The same template is compared with the login details captured from the mouse tasks. In the case of a laptop, the touchpad serves to extract the mouse features. The mouse's sensitivity affects the performance. The mouse features include general movement, drag and drop, stillness, and point-and-click (single or double) actions (Bours 2012; Shen et al. 2013).
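As a minimal illustration, average pointer speed, one of the simplest general-movement features, can be derived from a timed trace. The trace below is invented; a real profiler would collect many traces and many features (curvature, pauses, click durations).

```python
# Sketch: a simple mouse-dynamics feature from a movement trace.
# Each sample is (t_seconds, x_pixels, y_pixels); values are invented.
import math

def mouse_speed(trace):
    """Average pointer speed (pixels/second) over a trace."""
    dist = sum(math.hypot(x2 - x1, y2 - y1)
               for (_, x1, y1), (_, x2, y2) in zip(trace, trace[1:]))
    return dist / (trace[-1][0] - trace[0][0])

trace = [(0.0, 0, 0), (0.1, 30, 40), (0.2, 60, 80)]
speed = mouse_speed(trace)   # two 50-px segments in 0.2 s, about 500 px/s
```

Such per-feature statistics are then aggregated into the behavioral template built during the registration phase.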

4.5.2.5 Speaker’s Voice Recognition

Speaker recognition is the most researched behavioral biometrics. Although voice production depends on the physical characteristics of the mouth, nose and throat, this biometrics is considered to be of the behavioral type, because the pronunciation variations and the manner of speech are intrinsically behavioral. The specific voice features result from various analyses, such as the amplitude spectrum, the localization of spectral peaks related to the vocal tract shape, or the pitch striations related to the user's glottal source. Similar to keystroke analysis, speaker recognition can be performed in either static (text-dependent) or dynamic (text-independent) mode (Saquib et al. 2010). In the text-dependent mode, the user is asked by the biometric system to pronounce a particular phrase, while in the text-independent mode the user is free to speak any phrase. In the latter case, the verification accuracy usually improves as the text length increases. In between, a pseudo-dynamic mode exists, where the user is requested to say two numbers previously enrolled at random in a database. As a principle, the normalized amplitude of the input signal is decomposed into several band-pass frequency channels for the purpose of feature extraction. The type of the extracted features may vary. Typical features are the Fourier transform of the voice signal for each channel, along with some extra information consisting of pitch, tone, cadence or the shape of the larynx. Among behavioral biometric systems, speaker recognition is the most accurate approach. Nevertheless, the voice might be perturbed by various factors, such as illness, emotional or mental state or even age, leading to inaccurate results.
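The band-pass decomposition idea can be sketched with a plain DFT in pure Python. This is only a didactic stand-in: a production system would use an FFT and perceptual features such as MFCCs, and the signal below is a synthetic tone rather than real speech.

```python
# Sketch: band-energy voice features via a plain DFT (pure Python).
import math

def dft_magnitudes(signal):
    """Magnitudes of the first half of the DFT bins (real input)."""
    n = len(signal)
    return [abs(sum(signal[t] * complex(math.cos(2 * math.pi * k * t / n),
                                        -math.sin(2 * math.pi * k * t / n))
                    for t in range(n)))
            for k in range(n // 2)]

def band_energies(signal, bands=4):
    """Group spectrum bins into equal-width bands; sum squared magnitudes."""
    mags = dft_magnitudes(signal)
    size = len(mags) // bands
    return [sum(m * m for m in mags[i * size:(i + 1) * size])
            for i in range(bands)]

# A pure tone concentrates its energy in a single band (bin 5 here):
n = 64
tone = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
features = band_energies(tone)   # energy peaks in the first band
```

A feature vector of such band energies, computed frame by frame, is the kind of representation that is then matched against the enrolled speaker template.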

4.5.3 Mobile and Web-Based Technologies

In Sect. 4.3, the usage of mobile devices and applications in group decision-making was reviewed. According to Acuity Market Intelligence (2014), the mobile biometrics market will grow explosively, from $1.6 billion in 2014 to $34.6 billion in 2020. This prediction covers all biometric sensors (modalities) embedded in smart mobile devices (smart phones, tablets, and intelligent wearables). The report claimed that 100 % of smart mobile devices will include embedded biometric sensors as a standard feature by 2020. According to the report, each year more than 800 billion transactions requiring different levels of biometric authentication will be processed, while more than 5.5 billion biometric applications are foreseen to be downloaded. Thus, implementing more authentication modalities by replacing conventional methods with biometrics on mobile devices seems to be a reliable solution. Mobile biometric solutions are implemented by device manufacturers, as well as by independent vendors acting as third parties, which offer software solutions. For example, the Intel Security Division has developed a biometric authentication application relying on either face or fingerprint, named TrueKeyTM (Intel 2015), available for various platforms, either server-based or mobile. Another company, Sensory (2015), released an authentication application named TrueSecureTM that combines voice and vision (face-based) authentication for mobile phones, tablets, and PCs. Mobile technology also incorporates various biometric modalities. Devices with built-in fingerprint sensors exist on the market. One example is the TouchID fingerprint technology developed by Apple for the iPhone 5S, which incorporates a fingerprint module. Samsung also came up with a fingerprint solution for its Galaxy Tab S model, as well as for the Samsung Galaxy S6 model.
Vision-based authentication solutions are enabled on all smartphones that integrate high-resolution cameras into their hardware, making it easy for third parties to develop such software-based authentication options. Other mobile or web-based biometric technology vendors include Applied Recognition (2016), with Ver-ID for various applications.

4.5.4 Possible Attacks

Similar to hacking a conventional authentication modality (password, token, and so on), efforts have been made to hack or break biometric authentication systems. A potential attack against a biometric system is possible at any component of the system. In the case of virtual electronic meetings, the network-distributed (web-based) systems are more vulnerable to attacks than the stand-alone biometric systems. This is due to the fact that, in a stand-alone biometric system, all processes are performed within a single processing unit. On the contrary, for physically disparate biometric systems, the attack may also occur in the transmission path, or in any server performing the authentication. The most common attack is the one against the sensor. When the sample acquisition process is fully automated (i.e., no watching guard exists to monitor the acquisition process), an impostor can easily bypass the system by simply presenting a copy of the biometric data of a legitimate user in front of the sensor. The attempt to break the biometric system using such a method is named a spoofing attack. To date, there is no commercial biometric technology that is robust enough against such attacks. The copy may come in various formats, depending on the biometric modality. In the case of facial biometrics, the impostor may present a still image, a video sequence playback, or even a 3D silicone or rubber mask of the genuine user. The examples presented by Duc and Minh (2009) clearly point out the weaknesses of such systems and emphasize the necessity of incorporating reliable anti-spoofing mechanisms into biometric systems. Hence, not surprisingly, many research works were devoted to finding robust solutions for detecting spoofing attacks.
The spoof detection approaches fall into four categories: (a) challenge-response based methods requiring user interaction, (b) detection of involuntary behavioral movements of parts of the face and head, (c) data-driven characterization, and (d) the presence of special anti-spoofing devices. Google (2013) patented a blinking-based anti-spoofing mechanism. Spoofing a real iris with a good quality image is also possible. Gupta et al. (2014) used a commercial software development kit, VeriEye, and successfully spoofed the system with printed images of irises. Voice impersonation can also be applied to trick both automated and human verification in voice authentication systems (Mukhopadhyay et al. 2015). A legitimate participant's voice can be recorded in various ways, including in close proximity between the attacker and the user, through a spam call, or by searching for audio-video recordings over the Internet. With the help of a voice morphing program, the attacker may synthesize the user's voice by using just a few samples.

4.5.5 Attributes of Effective Technologies

In order to be practically usable, biometric systems and technologies are expected to possess several characteristics, as follows (Jain et al. 2006, p. 4):

• Universality, that is, the ability of a specific biometric system to be applied to a whole population of users. This is directly connected to the Failure to Enroll (FTE) condition, which refers to the case when a part of the population cannot be enrolled for whatever reason. One particular reason for an FTE error is the lack of enrollment of an individual who, consequently, does not have the required biometrics;
• Uniqueness, that is, the ability to successfully discriminate people. The biometric features must be as distinct as possible from one individual to another;
• Persistency, that is, the ability of biometric features not to change over time. Some features do not change (iris, fingerprint patterns, vascular system, and so on), while others do (facial features);
• Collectability, that is, the ability of the system to perform the acquisition on any occasion, regardless of environmental changes, such as changes in illumination;
• Simplicity of recording and transmission, which should be easy to use and not error-prone;
• Cost-efficiency of the whole process;
• Acceptability by the participants to group virtual meetings. Typically, highly invasive biometric technologies (such as retina based systems) are less acceptable than those using non-invasive approaches (such as vision based or touchless sensors). Another aspect to be considered is privacy;
• Scalability, which refers to the ability of the system to accommodate a large number of enrolled individuals while providing a reasonable accuracy. The degree of required scalability is application-dependent. In the case of a network-distributed system (such as a bank application or a large group decision system), the number of enrolled individuals might easily reach high figures and the system should cope with this overwhelming amount of data.
For such a large-scale biometric application, its performance (accuracy, FRR and FAR) is critical. More exactly, one false rejection per month might be acceptable, but hundreds of false rejections during one virtual electronic meeting would be disastrous;
• Resilience, that is the ability of the system to handle exceptions. An example would be an individual whose biometric features might not be easily acquired. If a user has a broken arm, he may need human intervention to use a hand- or fingerprint-based biometric system;
• Circumventability, that is the ability of the system to detect attacks. The sensor plays an important role here and should be tamper-proof.
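The FRR/FAR trade-off mentioned under Scalability can be illustrated with a minimal sketch. The score values and the threshold below are hypothetical, assuming a matcher that outputs scores in [0, 1] where higher means a better match:

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    """Compute False Acceptance Rate and False Rejection Rate at a
    given decision threshold (higher score = better match)."""
    # An impostor is falsely accepted when their score reaches the threshold.
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    # A genuine user is falsely rejected when their score falls below it.
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

# Hypothetical matching scores for genuine users and impostors
genuine = [0.91, 0.88, 0.75, 0.69, 0.95, 0.82]
impostor = [0.12, 0.33, 0.41, 0.72, 0.25, 0.18]

far, frr = far_frr(genuine, impostor, threshold=0.7)
print(far, frr)  # raising the threshold lowers FAR but raises FRR
```

Sweeping the threshold over the score range traces the trade-off curve; a large-scale deployment would pick the operating point whose FRR keeps false rejections rare at the expected session volume.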

4.5.6 Standards

As noted throughout the section, a network-distributed biometric technology involves several components, including the sensor, the communication channel and the web-based server, components that rely on different hardware solutions. Not only is the hardware different, but the integrated software is also different for each hardware configuration. Moreover, a fully operational system working on a specific operating system might not be compatible with another operating system. Another situation is caused by multi-modal biometric systems relying on two or more biometric modalities (face, fingerprint and voice, for example) that yield individual scores which are finally fused to output a single matching score. Without a common format shared among these modalities, the multi-modal biometric system cannot operate. Finding mechanisms for each component of the biometric system to communicate has led to the necessity of standardization. There are several working groups concerned with biometric standards. At the international level, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) play a significant role. ISO and IEC have established a joint technical subcommittee, ISO/IEC JTC 1/SC 37 (ISO 2015), to ensure a high-priority, focused, and comprehensive approach worldwide for the rapid development and approval of formal international biometric standards. There are several aspects to consider, including Data Interchange Format Standards, Data Structure Standards and Technical Interface Standards. The Data Interchange Format represents the lowest level of interoperability between systems using the same modality and addresses the actual representation of the biometric data itself.
Data Structure Standards address the transmission of formatted data by providing the necessary wrapper around the biometric data, within the so-called Common Biometric Exchange File Format, to facilitate interoperability between different systems or system components, forward compatibility for technology improvement, and software and hardware integration. Data Interchange Format Standards provide the mechanism for the extraction, matching and decision modules of the biometric system. Technical Interface Standards provide an Application Programming Interface (API) by defining the format for the Biometric Information Record, so that components can understand and interpret records. A representative standard is BioAPI, which defines a framework for installing the components, making them compliant with the plug-and-play concept. BioAPI tries to hide as many of the unique attributes of individual biometric technologies, vendor implementations, products and devices as possible. A Biometric Service Provider can then plug in the components through a Service Provider Interface.
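The score-level fusion of multi-modal systems mentioned above can be sketched as a weighted sum of per-modality matching scores brought to a common range. The modality names, score ranges, and weights below are purely illustrative and are not prescribed by any of the standards discussed:

```python
def min_max_normalize(score, lo, hi):
    """Map a raw modality score into [0, 1] given its known score range,
    so that scores from heterogeneous matchers become comparable."""
    return (score - lo) / (hi - lo)

def fuse_scores(scores, weights):
    """Weighted-sum fusion of normalized per-modality scores into a
    single matching score."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(weights[m] * s for m, s in scores.items())

# Hypothetical raw scores and native score ranges per modality
raw = {"face": 72.0, "fingerprint": 0.83, "voice": 410.0}
ranges = {"face": (0.0, 100.0), "fingerprint": (0.0, 1.0), "voice": (0.0, 500.0)}
weights = {"face": 0.3, "fingerprint": 0.5, "voice": 0.2}

normalized = {m: min_max_normalize(raw[m], *ranges[m]) for m in raw}
fused = fuse_scores(normalized, weights)
print(round(fused, 3))  # prints 0.795
```

The sketch makes the interoperability problem concrete: without an agreed representation and score range for each modality, the fusion step has nothing meaningful to combine.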

4.6 Game Technology as a Tool for Collaborative Decision-Making (By Ioana Andreea Ștefan)

As described in Sect. 3.1, to accommodate teamwork, today’s organizational landscapes integrate several Information and Communication Technologies (I&CT) that aim at enhancing the efficiency of workflows and the performance of joint efforts. A variety of tools are used to support both face-to-face and virtual multidisciplinary teams engaged in routine or more complex, strategic decision-making (Becerra-Fernandez et al. 2010; Scott and Timmerman 2013). These tools can be of crucial importance for key decision-making activities and project implementations. However, organizations need to make sure not only that they select the right tool for the right job, but also that their employees have the skills required to operate such tools efficiently and the expertise expected to complete a specific decision-making task. In this context, Digital Games (DG), Multi-User Virtual Environments, Massive Multiplayer Online Games, and virtual worlds (Riterfeld et al. 2009; Bredl and Boshe 2013; Stanescu et al. 2015a, b) have emerged as effective tools able to motivate skill building, knowledge sharing, communication and engagement (Connolly et al. 2012; Wouters et al. 2013). In contrast with entertainment games, serious games are defined as

the application of gaming technology, process, and design to the solution of problems faced by businesses and other organizations. Serious games promote the transfer and cross-fertilization of game development knowledge and techniques in traditionally non-game markets such as training, product design, sales, marketing, and so on. (Susi 2007)

In recent years, DGs and gamification have gained momentum as tools that are able not only to convey hard skills, such as the understanding of how complex systems operate, but also to mediate soft skills like collaboration and communication in diverse business, social, and cultural contexts (Duin et al. 2011). DGs have also been used as stimuli for creativity and innovation, with a significant uptake in co-design, co-creation, and co-evolution processes (Walsh 2012; Jennings 2016). Hauge et al. (2008) investigate ways of using DG and simulation tools to increase the understanding of the behaviour of production networks organised as virtual organisations (see Sect. 1.1.3) and review the existing software products in the field. In this section, we explore the way in which DGs can be used to support skill development and collaborative decision-making. The section contains a selection of core game mechanics with the potential to support collaborative work, which can be applied both in DGs and to gamify collaborative decision support applications. A list of seventeen collaborative DGs and their key characteristics is presented. To exemplify the approach, a case study on a collaborative decision-making DG has been carried out.

4.6.1 The Game Mechanics

Games are play activities, conducted in virtual spaces that mimic reality. The players try to achieve their goals by acting in accordance with specific rules. Game mechanics are the rules, processes, and data that define how the play progresses, what happens when, and what conditions determine victory or defeat (Adams and Dormans 2012). At the core of these applications are the game mechanics. Sicart (2008) defines game mechanics as the methods invoked by agents for interacting with the game world. They can be used to create an environment that facilitates and enhances interpersonal relationships and socialisation, resulting in more productive collaborations (Pallot et al. 2012). Moreover, mistakes are allowed in DGs and are considered good ways to gain knowledge and experience. Building upon (Lim et al. 2013), the following game mechanics have been identified as relevant for collaborative decision-making games:
• Rewards represent the feedback a player receives for a worthy action. They are used to motivate the player to progress in the game. In collaborative environments, rewards are collected to strengthen the team and to differentiate teams.
• Collecting plays a significant role in collaborative work, as the collected competencies of team members can bring supplementary benefits for the team.
• The Protégé Effect represents the player’s tendency to work harder for their avatar or alter ego than for themselves. The use of this mechanic can have significant benefits for collaborative work, as it can motivate the player, through his/her avatar, to perform better in the game. This mechanic is usually employed in games that have several concurrent transactions, where the challenge involves making the best decision based on the given resources and time constraints.
• Tokens are used to introduce elements of surprise within the game.
They can be used as incentives for players to perform better within their team or to stimulate collaboration within game teams.
• Cascading information and Story are used to provide the necessary level of understanding at each point during a DG. This is particularly important for collaborative games, as players need to clearly understand their role and their contribution within the team.
• Questions and Answers are used within the gaming environment as a basic, yet effective means of interacting and engaging with the player to facilitate learning and knowledge sharing at team level.
• Role Playing establishes the effectiveness of actions within the game, depending on how well the player assumes and develops their role as a virtual character. In collaborative games it is important that players can assume various roles within teams, in order to develop various sets of skills.
• Capture/Eliminate enables teams to work together to reach a common goal: collect a number of points to get to the next level, eliminate a competitor, and so on.
• Quick feedback on the progression of individual players gives them instant gratification and enables teams to improve their performance.

• Action Points are used to control what a player may do during their turn in the game by allocating them a budget of “action points”. Team players can pool their action points to achieve certain goals in collaborative games.
• Appointment is a mechanic in which, to succeed, a player must return at a predefined time to take a predetermined action. It can be used to make team members responsible for their actions and make them understand the potential negative effects their inaction can have on the entire team.
• Communal Discovery involves an entire community working together to solve a problem. It provides opportunities to shape players’ behaviours in collaborative work, as well as to enhance their communication and social skills.
• Meta-game mechanics are rewards or improvements that can be earned during the actual game-play and/or outside of it. Just like the action points mechanic, these can be used at team level to reward team efforts instead of solitary ones.
• Ownership is used to create loyalty within the gaming pool. This mechanic enables a game team to gain personality and motivates team members to work hard for their team.
• Cooperation, Collaboration. In cooperative games, the mechanics require players to work with one another, but their goals are different, and not all players are guaranteed to benefit equally. In collaborative games, players share common goals and outcomes; players either win or lose together. Cooperative games exist on the spectrum between competitive and collaborative games, where gamers are rewarded for group-oriented strategies only when it is in their own self-interest.
The set of game mechanics presented above represents but a fraction of the wide spectrum of gamification mechanics that could potentially be applied to collaborative decision-making processes.
While some are already incorporated in collaborative games, many of these mechanics were not specifically developed to support decision processes, but were adapted from entertainment and casual games.
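As an illustration of how the Action Points mechanic could be adapted for team play, the sketch below pools per-player budgets into a shared team reserve. The class and the point values are hypothetical and not taken from any of the games discussed:

```python
class TeamActionPool:
    """A shared budget of action points that team members contribute to
    and spend from, as in a collaborative turn-based game."""

    def __init__(self):
        self.points = 0

    def contribute(self, player, amount):
        # Each player adds their per-turn allocation to the team reserve.
        self.points += amount

    def spend(self, cost):
        # A joint action succeeds only if the pooled budget covers its cost,
        # which nudges players toward coordinating their contributions.
        if cost > self.points:
            return False
        self.points -= cost
        return True

pool = TeamActionPool()
pool.contribute("alice", 3)
pool.contribute("bob", 4)
print(pool.spend(6))  # True: the joint action is affordable only by pooling
print(pool.points)    # 1 point left in the team reserve
```

The design choice here mirrors the text: an action neither player could afford alone becomes possible once budgets are pooled, rewarding group-oriented strategies over solitary ones.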

4.6.2 Software Tools

There are plenty of digital games on the market. The characteristics of several games are compared in Table 4.3, starting from the work of Baalsrud Hauge et al. (2012). Most of the games have been developed to enhance decision-making skills, but also to improve knowledge sharing at team level and the understanding of business processes. The assimilation and acceptance of DGs, and the gamification of existing collaborative decision support tools, can address significant challenges and open new opportunities for enhanced collaborative decision-making. Even if DGs have proven to be efficient ways to transmit knowledge and to socialise, there are still difficulties in integrating gaming within organisations because of the existing misconceptions about games. DGs are often perceived more as entertaining activities and not as work-related activities.

Table 4.3 A comparison of digital games used for collaborative decision making

Games compared: COSIGA, GLOTRAIN, LOGTRAIN, PRIME, SHARE, SPIKO, SUPPLY NET, REFQUEST, Supply chain game, EIS, Beer game, TAC-Supply chain management, MARGA, Top sim logistics, Marga service, Delta design, Shortfall.

Characteristics compared, marked per game in the original table: distributed work; knowledge management and sharing; product manufacturing; innovation/product development skills; decision/project management skills; supply chain management and logistics skills; risk management skills; target group (decision makers and/or engineering students).

4.7 Notes and Comments

In 2011, at the Gartner Symposium/ITxpo, David Cearley, the Gartner vice-president, drew the attention of decision makers in the IT sector to several technologies and IT products and their impact, and advised them to be prepared (Cooney 2011). Among the ten technology issues, the most relevant for the subject of this book are:
• The end of the PC era, associated with Windows dominance, and its replacement by a post-PC era, in which Windows would be one of the supported environments on a market expected to be dominated, up to 80 %, by Apple’s iOS and Google’s Android;
• The dynamic evolution of mobile-centric applications and interfaces;
• The rise of application stores and marketplaces;
• The thorough usage of [data] analytics and simulation in any activity, including enabling and tracing collaborative decision-making;
• Big Data, enabled by new techniques to handle huge datasets;
• Cloud computing and the development of hybrid private/public cloud apps.
The chapter addressed the above technologies.
We tried to describe the essential aspects of several information and communication technologies and to highlight their potential usages, together with reported results, in collaborative decision-making activities. The main ideas to be retained by the reader are the following:
• Business Intelligence and Analytics led to a new science and an associated class of jobs, namely Data Science and data scientist, respectively;
• Web 4.0 will be the fourth generation of the Web, but it is still an idea in progress at this time;
• The Web is moving toward using neural networks, genetic algorithms and other artificial intelligence concepts, in order to become an intelligent web;
• Web-based decision support systems can offer timely, user-friendly and secure distribution of information for business development;
• Social networks have created a new world of communication, cooperation and collaboration, and have changed the way in which people communicate and collaborate;
• Social networks are very important in collaborative decision support systems, because ever more people need to interact and exchange opinions in order to make correct decisions;

• Social networking in the on-line environment requires trust between participants, because members of social networks reveal sensitive information that can be exploited;
• In the context of the decision-making process, mobile technologies gain new meanings, because all the situations that were possible in a classical format, using desktop technologies, are changing now due to this kind of usage migration;
• Nowadays, mobile devices also differ in terms of screen sizes, screen resolutions, types of processors and available storage. We take into consideration mobile devices such as smart-phones, tablets and PDAs; recently, the “phablet” concept has been introduced to denote a smart-phone with a larger screen;
• New technologies may require various biometric solutions, such as fingerprint recognition, face recognition and iris detection, to prevent unauthorized access;
• Serious digital games can play a significant and convenient role in training people for real decision situations.
The next chapter of the book will bring in more practical technical details on computer-supported collaborative work and the decision-making process. It will contain the presentation of a model of cooperative decision-making, followed by a practical example of applying data mining to decisions in the labor market, and iDecisionSupport, a practical I&CT platform that supports activities in the decision support domain. Space limitations prevented us from giving more details about the technologies addressed in the chapter beyond the essential aspects judged sufficiently relevant for the subject of the book. The interested reader can find additional information in the books of Sauter (2014) for business intelligence, Domingue et al. (2011) for web technology, Kadushin (2012) for social networks, Stüber (2011) and Agrawal and Zeng (2015) for mobile systems, Furht and Escalante (2010) and Erl et al.
(2013) for cloud computing, and Kaklauskas (2015) for biometric technologies. The potential dangers of a “Big Brother syndrome” caused by the new advanced technologies should not be neglected; they are reviewed by Power (2016).

References

Acuity Market Intelligence (2014) The Global Biometrics and Mobility Report: The Convergence of Commerce and Privacy. Market Analysis and Forecasts 2014 to 2020 (available at: http://www.acuity-mi.com/GBMR_Report.php, accessed 28 Dec. 2015).
Adams E, Dormans J (2012) Game Mechanics: Advanced Game Design. New Riders, Berkeley.
Aghaei S, Nematbakhsh M A, Farsani H K (2012) Evolution of the World Wide Web: from Web 1.0 to Web 4.0. International Journal of Web & Semantic Technology (IJWesT), 3(1): 1–10.
Agrawal D, Zeng QA (2015) Introduction to Wireless and Mobile Systems. 4th edition, Cengage Learning, Boston, MA.
Alexa (2016) Alexa Top 500 Sites (available at http://www.alexa.com, accessed on 12.02.2016).
Alter S (1980) Decision Support Systems: Current Practice and Continuing Challenges. Addison Wesley, Reading, MA.

Alter S (1977) A taxonomy of decision support systems. Sloan Management Review, 19(1): 39–56.
Applied Recognition Inc. (2016) Ver-ID (available at: http://appliedrec.com/products/ver-id/, accessed on 28 December 2015).
Ashbourn J (2014a) Biometrics in the New World: The Cloud, Mobile Technology and Pervasive Identity. Springer Science & Business Media.
Ashbourn J (2014b) Biometrics: Advanced Identity Verification: The Complete Guide. Springer, London.
Atzori L, Iera A, Morabito G (2010) The internet of things: A survey. Computer Networks, 54(15): 2787–2805.
Baalsrud Hauge J, Hoeborn G, Bredtmann J (2012) Challenges of serious games for improving students’ management skills on decision making. In: Handbook of Research on Serious Games as Educational, Business and Research Tools: Development and Design, Cruz-Cunha M M (ed.), IGI Global: p. 947–964.
Bange C, Grossner T S, Janoschek N (2013) Big Data Survey Europe: Usage, Technology and Budgets in European Best Practice Companies. BARC Institute, Würzburg.
Becerra-Fernandez I, Del Alto M, Stewart H (2010) The launch of web-based collaborative decision support at NASA. In: Interdisciplinary Perspectives on E-Collaboration: Emerging Trends and Applications, Kock N (ed.), IGI Global, Hershey: p. 210–230.
Bekkering E, Shim J P (2006) i2i trust in videoconferencing. Communications of the ACM, 49(7): 103–107.
Benedict K (2012) Using mobile technology to drive decision making. ClickBlog (available at: http://blogs.clicksoftware.com/index/using-mobile-technology-to-drive-decision-making/, accessed 05.01.2016).
Berners-Lee T (1989) Information Management: A Proposal. World Wide Web Consortium (available at: https://www.w3.org/History/1989/proposal.html, accessed on 12.02.2016).
Berners-Lee T (1999) Transcript of Tim Berners-Lee’s talk to the LCS 35th Anniversary celebrations, Cambridge, Massachusetts (available at: http://www.w3.org/1999/04/13-tbl.html, accessed on 10.07.2016).
Berners-Lee T, Hendler J, Lassila O (2001) The Semantic Web. Scientific American (available at: http://www.scientificamerican.com/article/the-semantic-web/, accessed on 05.01.2016).
Bharati P, Chaudhury A (2004) An empirical investigation of decision-making satisfaction in web-based decision support systems. Decision Support Systems, 37(2): 187–197.
Bhargava H K, Power D J, Sun D (2007) Progress in web-based decision support technologies. Decision Support Systems, 43: 1083–1095.
Billström O, Cederquist L, Ewerberg M, Sandgren G, Uddenfeldt J (2006) 50 years with mobile phones. Ericsson Review, 3.
Bours P (2012) Continuous keystroke dynamics: A different perspective towards biometric evaluation. Information Security Technical Report, 17(1–2): 36–43.
Brandas C, Didraga O (2014) Collaborative decision support systems: Cloud, mobile and social approaches. In: Proceedings of the 13th International Conference on Informatics in Economy, IE 2014, May 15–18, 2014, Bucharest, Romania, ASE Printing House: p. 2247-1480.
Brandt F, Chabin G, Geist C (2015) Pnyx: A powerful and user-friendly tool for preference aggregation. In: Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems (AAMAS ’15), May 4–8, 2015, Istanbul, Turkey, International Foundation for Autonomous Agents and Multiagent Systems, Richland: p. 1915–1916.
Bredl K, Boshe W (2013) Serious Games and Virtual Worlds in Education, Professional Development, and Healthcare. IGI Global, Hershey.
Brynjolfsson E, Hitt L M, Kim H M (2011) Strength in numbers: How does data-driven decision-making affect firm performance? SSRN working paper (available at http://ssrn.com/abstract=1819486, accessed on 03.10.2015).
Buciu I, Gacsadi A (2016) Biometrics systems and technologies: A survey. International Journal of Computers Communications & Control, 11(3): 313–328.

Bughin J, Chui M, Manyika J (2010) Clouds, big data and smart assets: Ten tech.-enabled business trends to watch, McKinsey Quarterly. May. Bulmer D, DiMauro V (2010) The New Symbiosis of Professional Networks: Social Media’s Impact on Business and Decision-Making. The Society for New Communications Research, (available at http://sncr.org/wp-content/uploads/2010/02/NewSymbiosisReportExecSumm.pdf, accessed on 12.02.2016). Camarinha-Matos L M, Macedo P (2010) A conceptual model of value systems in collaborative networks. Journal of Intelligent Manufacturing, 21(3): 287–299. Cederholm D (2009). Web Standards Solutions: The Markup and Style Handbook, Special Edition, Friends of ED, Apress. Chang Y-J, Li H-H, Chou L-D, Shin H-Y (2007) A general architecture of mobile social network services. In: Proceedings of the International Conference on Convergence Information Technology. IEEE Computer Society: p. 151–156. Chapman P, Clinton J, Kerber R, Khabaza T, Reinartz T, Shearer C, Wirth R (2000) CRISP-DM 1.0. Step-by Step data Mining Guide (available at: http://the-modeling-agency.com/crisp-dm. pdf, accessed on 03.10.2015). Chatzimilioukis G, Konstantinidis A, Laudias C, Zeinalipour-Yaztti D (2012) Crowdsourcing with smartphones. Internet Computing IEEE, 16(5): 36–44 (available at: http://smartlab.cs.ucy.ac.cy, accessed on 20.02.2016). Chen C L P, Zhang C-Y (2014) Data-intensive Applications, Challenges, Techniques and Technologies: A Survey for Big Data. Information Sciences, 275: 314–347. Chen H, Chiang R H, Storey V C (2012) Business intelligence and analytics: From Big Data to Big Impact. MIS quarterly, 36(4) December: 1–24, 1165–1188. Chen M, Mao, S., Liu, Y (2014) Big Data: a Survey. Mobile New Appl., 19: 171–209. Cheung C MK, Chiu P Y, Lee MK O (2011) Online social networks: Why do students use Facebook? Computers in Human Behavior, 27(4): 1337–1343. 
Chi H, Lai Y (2015) The influence of 4G mobile value added services on usage intention: The mediating effect of perceived value of smartphone and phablet. International Journal of Marketing Studies, 7(6): 50–59.
Chilamkurti N, Zeadally S, Chaouchi N eds (2013) Next Generations Wireless: 4G and Beyond Technology. Springer Science and Business, London.
Ciurea C (2010) The development of a mobile application in a collaborative banking system. Informatica Economică, 14(3): 86–97.
Ciurea C, Pocatilu P (2012) Designing m-learning applications for collaborative virtual environments. International Journal of Education and Information Technologies, 6(1): 149–156.
Ciurea C, Tudorache C (2014) New perspectives on the development of virtual exhibitions for mobile devices. Economy Informatics, 14(1): 31–38.
Connolly T M, Boyle E A, MacArthur E, Hainey T, Boyle J M (2012) A systematic literature review of empirical evidence on computer games and serious games. Computers & Education, 59(2): 661–686.
Cooney M (2011) Gartner: The top 10 strategic technology trends for 2012. Network World, October 18.
Corazza G C (1998) Marconi’s life. In: Proceedings of IEEE 86(7): 1307–1311.
Cukier K (2010) Data, data everywhere: A special report on managing information. Economist, February 28 (www.economist.com/node/15557443, accessed on 26.10.2015).
Dahmke M (1983) The Compaq computer: A portable and affordable alternative to the IBM personal computer. BYTE: 33–36.
Dahr V, Stein R (1997) Seven Methods for Transforming Corporate Data into Business Intelligence. Prentice Hall, Englewood Cliffs, NJ.
Dase M A, Tung L, Turban E (1995) A proposed research framework for distributed group support systems. In: System Sciences, 1995. Proceedings of the Twenty-Eighth Hawaii International Conference on, Vol. 4: p. 42–51, IEEE.

Daugman J (1994) Biometric personal identification system based on iris recognition. U.S. Patent 5,291,560.
Davenport T H, Patil D J (2012) Data scientist: The sexiest job of the 21st century. Harvard Business Review, October, 90: 70–76.
Davenport T H (2006) Competing on analytics. Harvard Business Review, 84(1), January: 98–107.
Dekleva S, Shim JP, Varshney V, Knoerzer G (2007) Evolution and emerging issues in mobile wireless networks. Communications of the ACM, 50(6): 38–43.
Diebold F (2012) On the origin(s) and development of the term “Big Data”. PIER working paper archive, Penn Institute for Economic Research, Dept. of Economics, University of Pennsylvania.
Dinh H T, Lee C, Niyato D, Wang P (2013) A survey of mobile cloud computing concepts: Architecture, applications and approaches. Wireless Communications and Mobile Computing, 13: 1587–1611.
Domingue J, Fensel D, Hendler J A eds (2011) Handbook of Semantic Web Technologies. Springer Science & Business Media, Berlin.
Doreian P, Stokman F (2013) Evolution of Social Networks. Routledge, Social Science.
Duc N M, Minh B Q (2009) Your face is NOT your password: Authentication bypassing Lenovo–Asus–Toshiba. Black Hat Conference.
Duin H, Baalsrud Hauge J, Hunecker F, Thoben K D (2011) Application of serious games in industrial contexts. In: Business, Technological, and Social Dimensions of Computer Games, Varvalho V H, Cruz-Cunha M, Tavares P eds. IGI Global, Hershey: p. 331–347.
Durkheim E (1893) De la division du travail social: étude sur l’organisation des sociétés supérieures. Paris: F. Alcan; translated in 1964 by Lewis A. Coser as The Division of Labor in Society, New York: Free Press.
Ergu D, Kou G, Peng Y, Shi Y, Shi Y (2013) The analytic hierarchy process: task scheduling and resource allocation in cloud computing environment. The Journal of Supercomputing, 64(3): 835–848.
Erl T, Puttini R, Mahmood Z (2013) Cloud Computing: Concepts, Technology, & Architecture. Pearson Education.
Fan W, Bifet A (2013) Mining Big Data: Current status and forecast for the future. SIGKDD Explorations, 14(2): 1–5.
Fernando N, Loke S W, Rayahu W (2013) Mobile cloud computing. Next Generation Computer Systems, 29: 84–106.
Fichter D (2005) The many forms of e-collaboration: Blogs, wikis, portals, groupware, discussion boards, and instant messaging. Online, 29(4): 48–50.
Filip F G (1995) Towards more humanized real-time decision support systems. In: Camarinha-Matos L, Afsarmanesh H, eds, Balanced Automation Systems: Architectures and Design Methods. Chapman & Hall, London: p. 230–240.
Filip F G (2008) Decision support and control for large-scale complex systems. Annual Reviews in Control, 32(1): 61–70.
Filip F G, Herrera-Viedma E (2014) Big Data in the European Union. The Bridge, 44(4): 34–37.
Filip F G (2012) A decision-making perspective for designing and building information systems. International Journal of Computers Communications & Control, 7(2): 264–272.
Fortune (2016) Pope says Internet, social networks are ‘gift of God’ if used wisely. Reuters, 22 January.
Foster I, Zhao Y, Raicu I, Lu S (2008) Cloud computing and grid computing 360-degree compared. In: Proceedings of Workshop on Grid Computing Environments (GCE’08): 1–10.
Fraternali P, Rossi G, Sánchez-Figueroa F (2010) Rich internet applications. IEEE Internet Computing, 14(3): 9–12.
Freeman L C (1978) Centrality in social networks conceptual clarification. Social Networks, 1(3): 215–239.
Furht B, Escalante A (2010) Handbook of Cloud Computing. Springer, New York.

Ghais A, Berzins G, Wright D (1987) INMARSAT and the future of mobile satellite services. IEEE Journal on Selected Areas in Communications, 5(4): 592–600.
Giot R, El-Abed M, Rosenberger C (2009) Keystroke dynamics authentication for collaborative systems. In: International Symposium on Collaborative Technologies and Systems: p. 172–179.
Goadrich M H, Rogers M P (2011) Smartphone development: iOS versus Android. In: Proceedings of the 42nd ACM Technical Symposium on Computer Science Education, SIGCSE’11, March 9–12, 2011, Dallas, Texas, USA: p. 607–612.
Google (2013) Method and system for spoof detection for biometric authentication (available at: http://www.google.com/patents/US8437513, accessed on 28 December 2015).
Gray P, Johansen R, Nunamaker J, Rodman J, Wagner G R (2009) GDSS past, present and future. In: Schaff D, Paradice D, Burstein F, Power D J, Sharda R eds (2009) Decision Support: An Examination of the DSS Discipline. Springer Science and Business Media, New York: 1–24.
Gruber T (2008) Collective knowledge systems: Where the social web meets the semantic web. Web Semantics: Science, Services and Agents on the World Wide Web, 6(1): 4–13.
Gubbi J, Buyya R, Marusic S, Palaniswami M (2013) Internet of Things (IoT): A vision, architectural elements, and future directions. Future Generation Computer Systems, 29(7): 1645–1660.
Gupta P, Behera S, Vatsa M, Singh R (2014) On iris spoofing using print attack. In: Proceedings of the 2014 22nd International Conference on Pattern Recognition: p. 1681–1686.
Hagerty J, Sallam R L, Richardson J (2012) Magic Quadrant for Business Intelligence Platforms. Gartner Inc, Stamford, CT.
Hanzo L, Haas H, Imre S, O’Brien D, Rupp M, Gyongyosi L (2012) Wireless myths, realities and futures: From 3G/4G to optical and quantum wireless. In: Proceedings of IEEE, 100: p. 1853–1888.
Hauge J B, Duin H, Hunecker F (2008) Application Areas for Serious Games in Virtual Organisations in Manufacturing (available at: http://games.biba.uni-bremen.de/info/2008-12.pdf, accessed on 16.07.2016).
Hirokawa R Y, Poole M S (1986) Communication and Group Decision-Making. SAGE Publications.
Hu H, Wen Y, Chua T-S, Li X (2014) Toward scalable systems for Big Data analytics: A technology tutorial. IEEE Access: 652–687.
Hyrynsalmi S, Makila T, Järvi A, Suominen A, Seppönen M, Knuutila T (2012) AppStore, marketplace, play! An analysis of multi-homing. In: Mobile Software Ecosystems, Jansen Slinger: 55–72.
Inmon W H (1981) Database Machines and Decision Support Systems. QED Technical Publishing Group, Westport.
Intel (2015) True Key (available at: https://www.truekey.com/, accessed on 28 December 2015).
ISO (2015) ISO/IEC JTC 1/SC 37 – Biometrics (available at: http://www.iso.org/iso/home/store/catalogue_tc/catalogue_tc_browse.htm?commid=313770&published=on, accessed on 28.12.2015).
Ivan I, Ciurea C, Zamfiroiu A (2014) Metrics of collaborative business systems in the knowledge-based economy. In: Proceedings of the 2nd International Conference on Information Technology and Quantitative Management, ITQM 2014, June 3–5, 2014, Moscow, Russia, Procedia Computer Science, 31: p. 379–388.
Jain A, Bolle R, Pankanti S eds (2006) Biometrics: Personal Identification in Networked Society (Vol. 479). Springer Science & Business Media.
Jennings S C (2016) Co-creation and the distributed authorship of video games. In: Examining the Evolution of Gaming and Its Impact on Social, Cultural, and Political Perspectives, Jensen L J, Valentine K D eds, IGI Global, Hershey: p. 123–146.
Kadushin C (2012) Understanding Social Networks: Theories, Concepts, and Findings. Oxford University Press, Oxford.
Kaisler S, Armour F, Espinosa J A, Money W (2013) Big Data: Issues and challenges moving forward. In: 46th Hawaii International Conference on System Sciences, IEEE Computer Society: p. 995–1004.
172 4 Essential Enabling Technologies

Kaklauskas A (2015) Biometric and Intelligent Decision Making Support. Springer. Kaklauskas A, Zavadskas E K, Pruskus V, Vlasenko A, Bartkiene L, Paliskiene R, Zemeckyte L, Gerstein V, Dzemyda G, Tamulevicius G (2011) Recommended biometric stress management system. Expert Systems with Applications, 38(11): 14011–14025. Kaplan A M, Haenlein M (2010) Users of the world, unite! The challenges and opportunities of Social Media. Business Horizons, 53(1): 59–68. Kelley P G, Cranor LF, Sadeh N (2013) Privacy as part of the app decision-making process. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM: p. 3393–3402. Kijkuit B, van den Ende J (2007) The organizational life of an idea: integrating social network, Creativity and decision-making perspectives, Journal of Management Studies, 44(6): 863–882. Koch M, Schwabe G, Briggs R O (2015) CSCW and social computing, Bus Inf Syst Eng, DOI 10.1007/s12599-015-0376-2. Kück G (2004) Tim Berners-Lee’s Semantic Web. South African Journal of Information Management, 6(1), March 2004. Kumar R, Novak J, Tomkins A (2010) Structure and evolution of online social networks. Link Mining: Models, Algorithms, and Applications: p. 337–357. Laney D (2001). 3-D Data Management: Controlling Data Volume, Velocity, Variety. META Group Research Note, February 6. Lee J S, Su Y W, Shen C C (2007) A comparative study of wireless protocols: Bluetooth, UWB, ZigBee, and Wi-Fi. In: Industrial Electronics Society, 2007. IECON 2007. 33rd Annual Conference of the IEEE:p.46–51). Li, S.Z., Jain, A., eds. (2011). Handbook of Face Recognition, Springer, London. Lim T, Louchart, S, Suttie, N, Ritchie J M, Aylett R S, Stănescu IA, Roceanu I, Martinez Ortiz I, Moreno-Ger P. (2013) Strategies for effective digital games development and implementation. In: Cases on Digital Game-Based Learning: Methods, Models, and Strategies, Whitton N ed. IGI Global: p. 168–198. Looney C A, Jesup L M, Valacich J S (2004). 
Emerging business models for mobile brokerage services. Communicatios of ACM 45(8): 75–81. Madden S. (2012) From Databases to Big Data, IEEE Internet Computing, May-June: 4–6. Maine, K, Devieux C, Swan P (1995) Overview of IRIDIUM satellite network. In: WESCON/’95. Conference Record. Microelectronics Communications Technology Producing Quality Products Mobile and Portable Power Emerging Technologies, IEEE: 483. Manyika J, Chui M, Brown B, Bughin J, Dobbs R, Roseburg C, Buyers A H (2011) Big Data: The Next Frontier for Innovation, Competition and Productivity. McKinsey Global Institute. Marklein T, Delahaye Paine K, Bagnall R (2011) Moving towards global standards for social media measurement. The 3rd European Summit on Measurement,08–10 June, Lisbon, (available at: http://amecorg.com/wp-content/uploads/2011/10/Workshop-G.pdf, accessed on 02.02.2016). Marsico M D, Nappi M, Tistarelli M (eds) (2014) Face Recognition in Adverse Conditions, IGI-Global. Mehta M, Ajmera E, Jondhale R (2013) Mobile cloud computing. International Journal of Electronics and Communication Engineering & Technology (IJECT), 4(5): 152–160. Meier R (2012) Professional Android 4 Application Development. John Wiley & Sons, Indianapolis, Indiana. Mukhopadhyay D, Shirvanian M, Saxena N (2015) All your voices are belong to us: stealing voices to fool humans and machines. In: Computer Security–ESORICS. Springer International Publishing: p. 599–621. Murphy G D (2011) Post-PC devices. A summary of early iPad technology adoption in tertiary environments. E-Journal of Business Education & Scholarship of Teaching, 5(1): 18–32. Narasimhan B, Nichols R (2011) State of cloud applications and platforms: The cloud adopters’ view. Computer, 44 (3): 24–28. Negash S, Gray P (2008) Business intelligence. In: Burstein F, Holsapple CW eds Handbook of Decision Support Systems 2, Springer Berlin Heidelberg: p. 175–193. References 173

Newcomer E (2002) Understanding Web Services: XML, Wsdl, Soap, and UDDI, Addison-Wesley Professional. Nof S Y (2007) Collaborative control theory for e-Work, e-Production, and e-Service, Annual Reviews in Control, 31(2): 281–292. Nylund A I (1999) Tracing the BI Timely Tree. Knowledge Management, 60, July. O’Reilly T (2005) What is Web 2.0: Design Patterns and Business Models for the Next Generation of Software (available at: http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what- is-web-20.html accessed on 23.06.2016). Padgham L, Winikoff, M (2005) Developing intelligent agent systems: A practical guide John Wiley & Sons. Pallot M, Le Marc C, Richir S, Schmidt C, Mathei J-P (2012) Innovation gaming: an immersive experience environment enabling co-creation. In: Handbook of Research on Serious Games as Educational, Business and Research Tools, Cruz-Cunha, M ed. IGI Global, Hershey: p 1–24. Perez I J, Cabrerizo E J, Herrera-Viedma L (2010) A mobile decision support system for dynamic group decision making problems. IEEE Transactions, 40(6): 1247–1256. Peng Y, Kou G, Shi Y, Chen Z (2008) A descriptive framework for the field of data mining and knowledge discovery. International Journal of Information Technology & Decision Making, 7 (04): 639–682. Petcu D, Macariu G, Panica S, Crăciun C (2013) Portable cloud applications-from theory to practice. Future Generation Computer Systems, 29(6):1417–1430. Pocatilu P, Boja C, Ciurea C (2013) Syncing mobile applications with cloud storage services, Informatica Economică, 17(2): 96–108. Power D J (1998) Web-based decision support systems. DSstar, The On-Line Executive Journal for Data-Intensive Decision Support, 2(33–34). Power D J (2002) Decision Support Systems. Concepts and Resources for Managers. Greenwood/Quorum Press, Westpoint, CT. Power D J (2008) Understanding data-driven decision support systems. Information Systems Management, 25: 149–157. Power D J (2015) Creating a data-driven society. 
In: Proceedings SIGDSS Workshop and Reshaping Society through Analytics, Collaboration and Decision Support: Role of BI and Social Media, Milan: 1–11. Power D J (2016) “Big Brother” can watch us. Journal of Decision Systems, 25(sup1):578–588. Power D J, Kaparthi S (2002) Building web-based decision support systems, Studies in Informatics and Control, 11(4): 291–302. Power D J, Phillips-Wren G (2011) Impact of social media and Web 2.0 on decision-making. Journal of Decision Systems, 20(3): 249–261. Provost F, Fawcett T (2013) Data science and its relationship to big data and data-driven decision making. Big Data 1(1): 51–59. Rasmussen S, Mangalagiu D, Ziock H, Bollen J, Keating G (2007) Collective intelligence for decision support in very large stakeholder networks: The future US energy system. In: IEEE Symposium on Artificial Life (ALIFE ‘07),1–5 April 2007: p. 468–475. Raychaudhuri D, Mandayam N B (2012) Frontiers of wireless and mobile communication. In: Proceedings of the IEEE 100(4): 824–840. Richharia M (2014) Mobile Satellite Communications: Principles and Trends, 2nd edition. John Wiley & Sons, Chichister. Rigopoulos G (2015) A group decision support system for collaborative decisions within business intelligence context. American Journal of Information Science and Computer Engineering,1 (2): 84–93. Rigopoulos G, Psarras J, Askounis D (2008a) An aggregation approach for group multicriteria assignment, American Journal of Applied Sciences 5(8):952–958. Rigopoulos G, Psarras J, Askounis D (2008b) Web support system for collaborative decisions. Journal of Applied Sciences 8(3): 417–419. Ritterfeld U, Cody M, Vorderer P (eds) (2009) Serious Games: Mechanisms and Effects. Routledge. 174 4 Essential Enabling Technologies

Rosen D, Nelson C (2008) Web 2.0: A new generation of learners and education. Computers in the Schools: Interdisciplinary Journal of Practice, Theory, and Applied Research, 25(3–4): 211– 225. Rumney M ed (2013) LTE and the Evolution to 4G Wireless. Design and Measurement Challenges. John Wiley & Sons, Chichester, UK. Rutter R, Lauke P H, Waddell C, Thatcher J, Lawton Henry S, Lawson B, Kirkpatrick A, Heilmann C, Burks M R, Regan B, Urban M (2007) Web Accessibility: Web Standards and Regulatory Compliance, Apress. Berkley, CA. Sagiroglou S, Sinane D (2013) Big Data: A review. In: Collaboration Technologies and Systems (CTS), 2013 IEEE International Conference:42–47. Sallam R L (2013) Collaborative Decision Making. In: Hype Cycle for Business Intelligence and Analytics. Gartner Inc., Stanford, CT: p. 14–15. Sanner M F (1999) Python: a programming language for software integration and development. J Mol Graph Model, 17(1), pp. 57–61. Saquib Z, Salam N, Nair R P, Pandey N,, Joshi A (2010) A survey on automatic speaker recognition systems. In: Kim T-h. Pal SK, Grosky W I, Pissinou H, Shih T, Ślęzak D eds. Signal Processing and Multimedia. Springer, Berlin, Heidelberg: p 134–145. Sarkar T K, Maillaux R, Oliner A A, Salazar-Parma M, Sengupta D L eds (2006). History of Wireless. John Wiley and Sons, Hoboken NJ. Sauter VL (2014) Decision support systems for business intelligence. John Wiley & Sons, Hoboken, N.J. Schlegel K (2013) Hype Cycle for Business Intelligence and Analytics. Gartner Inc., Standford, CT. Schlegel K, Hostman B, Bitterer A (2007) Magic Quadrant for Business Intelligence Platform. Gartner Inc. Scott C R, Timmerman C E (2013) Communicative changes associated with repeated use of electronic meeting systems for decision-making tasks. In: Collaborative Communication Processes and Decision Making in Organizations, Nikoi E, Boateng K eds. IGI Global, Hershey: p 1–24. Segan S (2012). Enter the Phablet: A history of phone-tablet hybrids. PC Magazine. 
Sensory (2015). Truly Secure (available at: http://www.sensory.com/products/technologies/ trulysecure, accessed on 28.12.2015). Sharaf MA, Chrysantis PV (2004) On-demand broadcasting for mobile decision making. Mobile Networks and Applications, 9: 703–714. Sharma P (2013) Evolution of mobile wireless communication networks-1G to 5G as well as future prospective of next generation communications network. International Journal of Computer Science and Mobile Computing-IJCSMC 2(8): 47–53. Shaw M E (1976) Group Dynamics: The Psychology of Small Group Behavior (2nd edition), McGraw-Hill, New York. Shen C, Cai Z, Guan X, Du Y, Yu T (2013) User authentication through mouse dynamics, IEEE Trans. on Information Forensics and Security 8(1): 16—30. Shi Y (2014) Big Data history, current status and challenges going forward. The Bridge, 44(4): 6– 11. Shim J P, Varshney U, Dekleva S, Knoerzer G (2006) Mobile and wireless networks: services, evolution and issues. International Journal of Mobile Communications, 4(4): 405–417. Shokri R, Theodorakopoulos G, Papadimitriatos P, Kazem E, Hubaux J-P (2014). Hiding in the mobile crowd: Localization privacy through collaboration. Dependable and Secure Computing, IEEE Transactions on, 11(3): 266–279. Sicart M (2008) Defining game mechanics. , 8(2): 1–14. Sivashanmugam K, Verma K, Sheth A P, Miller J (2003) Adding semantics to web services standards, International Conference on Web Services, Las Vegas, NV, June 23–26. Song Y (2015) From offline social networks to online social networks: Changes in entrepreneur- ship, Informatica Economică, 19(2): 120–133. References 175

Stanescu I A, Ştefan A, Filip F G (2015) Cloud-based decision support ecosystem for renewable energy providers. In Technological Innovation for Cloud-Based Engineering Systems. Springer International Publishing: p. 405–412. Stanescu, I A, Baalsrud Hauge, J. Stefan, A, Lim, T (2015). Towards Modding and Reengineering Digital Games for Education. In: Proceedings of GALA Conference, Rome, Italy: p. 550–559. Stüber G L (2011) Principles of Mobile Communication. 3rd edition, Springer Science & Business Media, New York. Susi T, Johannesson M, Backlund P (2007) Serious Games: An Overview (available at: http:// www.diva-portal.org/smash/get/diva2:2416/FULLTEXT01.pdf accessed on 16.09.2016). Tene O, Polonetsky J (2012) Big Data for all: privacy and user control in the age of analytics. Northwestern Journal of Intellectual Property, 11(5): 239–273. Tistarelli M, Li S Z, Chellappa R (eds.) (2009) Handbook of Remote Biometrics for Surveillance and Security, Springer, Dordrecht. Tönnies F (1887) Gemeinschaft und Gesellschaft, Leipzig: Fues’s Verlag, translated in 1957 by Charles Price Loomis as Community and Society, East Lansing: Michigan State University Press. Unhelkar B, Murugesan S (2010). The enterprise mobile applications development framework, IT Professional, 12(3): 33–39. Uţǎ A, Ivan I, Popa M, Ciurea C, Doinea M (2014) Security of virtual entities. In: Proceedings of the 15th International Conference on Computer Systems and Technologies (CompSysTech ‘14), Boris Rachev, Angel Smrikarov eds. ACM, New York, NY, USA, pp. 278–285. Valacich J S, Looney C A, Wright R T, Wilson D W (2015) Mobile computing and collaboration, In: J. F. Nunamaker Jr, R. O. Briggs, N. C. Romano Romano Jr. eds, Collaboration Systems: Concept, Value, and Use, Routledge: pp. 143–164. W3C (2016). Help and FAQ, (available at: https://www.w3.org/Help/, accessed on 24.01.2016). Walsh G (2012) Employing co-design in the design process. 
In: Handbook of Research on Serious Games as Educational, Business and Research Tools, Cruz-Cunha M (ed). IGI Global, Hershey: p. 1048–1063. Wang L, Cheng Q (2006) Web-based collaborative decision support services: concept, challenges and application, ISPRS Technical Commission II Symposium, Vienna, 12 – 14 July: p. 79–84. Wang X, Gorlitsky R, Almeida J S (2005) From XML to RDF: how semantic web technologies will change the design of ‘omic’ standards. Nature Biotechnology, 23: 1099–1103. Wang X, Vasilakos AV, Chen M, Lu Y, and Kwon T (2012). A survey of green mobile networks: Opportunities and challenges. Mobile Networks and Applications, 17(1); 4–20. Wasserman A I (2010). Software engineering issues for mobile application development. In Proceedings of the FSE/SDP workshop on Future of software engineering research—FoSER 2010, November 7–8, 2010, Santa Fe, New Mexico, USA, ACM.: p. 397–400. WEF (2013) Unlocking the Value of Personal Data: from Collection to Usage. World Economic Forum, Geneva. Weiss R, Zgorski I J (2012) Obama Administration Unveils “Big Data” Initiative: Announces $200 Million in New R&D Investments. Office of Science and Technology Policy Executive Office of the President, March. Wietsma R T A, Ricci F (2005) Product reviews in mobile decision and systems. PERMID:15–18. Wirth R, Hipp J (2000) CRISP-DM: Towards A Standard Process Model for Data Mining. In: Proceedings of the 4th International Conference on the Practical Applications of Knowledge Delivery and Data Mining:29–39. Wouters P, Van Nimwegen C. Van Oostendorp H, Van Der Spek E D (2013) A meta-analysis of the cognitive and motivational effects of serious games. Journal of Educational Psychology, 105(2): 249. WWW (2015) Social Media Measurement Guidelines, September 15, 2015, Version 4.1 (available at http://mediaratingcouncil.org/Soc%20Guidelines%20v4.1%20Public%20Comment% 20Version.pdf, accessed on 02.02.2015). Zhao J, Wroe C, Goble C, Stevens R, Quan D, Greenwood M (2004). 
Using semantic web technologies for representing e-science provenance. The Semantic Web—ISWC 2004, 176 4 Essential Enabling Technologies

Proceedings of Third International Semantic Web Conference, Hiroshima, Japan, November 7–11, Lecture Notes in Computer Science, 3298: 92–106. Zhou S K, Chellappa R, Zhao W (2006) Unconstrained face recognition, Springer Science & Business Media. Zikopoulos P, Eaton C (2011) Understanding Big Data: Analytics for Enterprise Hadoop and Streaming Data, McGraw-Hill, Osborne Media. Zikopoulos P, De Ross P, Parasuraman K, Deutsch T, Giles J, Corrigan D (2013) Harness the Big Data: The IBM Data Platform, McGraw Hill, New York. Chapter 5 Applications

In the previous chapters, we addressed general concepts concerning collaboration, decision-making, and decision support systems; specific methods to be used by groups; and technologies that influence the development and usage of the information tools meant to support multi-participant collaborative decision processes. This chapter presents three applications selected to illustrate some of the ideas and results of the preceding chapters. In Sect. 5.1, we study, through a simulation experiment, the performance of facilitation schemes in the context of collaborative decision-making. Section 5.2 contains a practical example of using data analysis in the particular context of the labour market. The last section gives a rather detailed presentation of iDS, an evolving I&CT platform which integrates several of the advanced technologies described in Chap. 4.

5.1 A Practical Swarming Model for Facilitating Collaborative Decisions

In Sect. 3.4.3, we briefly introduced the two basic approaches currently considered in the collaboration engineering research stream to automate the facilitation tasks: centralized and decentralized. This section presents an agent-based social simulation experiment that investigates the strengths and weaknesses of each approach when a collaborative decision-making process (CDMP) is modelled and executed during e-meetings. The experiment envisions a collaborative software tool that supports the mass collaboration of a large number of users in modelling and executing CDMP. A typical instance is an information system meant to support the collaborative work of the members of a board of directors and their decision assistants in setting an effective production and marketing plan. Therefore, the tool is needed to support the conceptual representation of CDMP models (see Sect. 3.4.2) and the exploitation of experiential knowledge

collected from the past deployment of CDMP sub-models in real business environments. Before prototyping the tool, as described by Zamfirescu and Candea (2012), we studied by means of social simulation the feasibility of engineering the tool according to some basic principles of stigmergic system design (Parunak 2006). In (Zamfirescu and Filip 2010; Zamfirescu and Candea 2011, 2013) we remarked that:

• the stigmergic coordination mechanisms make it possible to support the collective intelligence of facilitating the CDMP by exploiting the synergy of some contextual factors, such as the frequency and diversity of the problems for which the GDSS is used, the users' experience with the GDSS, and the willingness to use the functionalities available in many collaboration software tools in a creative way;
• the effectiveness of these mechanisms depends on how the model of a CDMP may be structured on different levels of abstraction and on how it supports the integration of the uncertainty associated with the modelling and execution of CDMP;
• the stigmergic coordination mechanisms implicitly lead to contextual recommendations for an improved CDMP design.

This section synthesizes the work reported by Zamfirescu and Filip (2010) to give some insights into the above-mentioned results. Basically, it complements the long-lasting field studies conducted in the mainstream of GDSS research.

5.1.1 The Concept of Stigmergic Coordination

Many social science theories, such as activity theory (Nardi 1996) or situated and distributed cognition (Hutchins 1995), reveal the essential role of the environment (physical or artificial) in mediating human cognition. In these theories, knowledge is externalized in a shared environment to shield human cognition from the full complexity of an open and dynamic environment. The basic mechanism that coordinates human actions through a shared environment has been conceptualized by many authors in terms of stigmergy (Susi and Ziemke 2001; Parunak 2006; Parunak et al. 2005; Rosen and Suthers 2011; Heylighen 2016). The concept of stigmergy was initially introduced by Grassé (1953) to describe the collective behaviour of social insects, but it was more recently generalised to describe an entire spectrum of indirect coordination mechanisms in which the trace left by an action in a shared environment stimulates subsequent actions. The cave paintings used in ancient human societies to communicate knowledge, and drivers' reactions to traffic conditions that affect the actions of other drivers, are classic examples of stigmergic coordination. But the most obvious examples of stigmergy are found in the digital world. For instance, Parunak (2006), Elliot (2007) and Heylighen (2016) showed that almost all collaborative support systems employ stigmergic coordination mechanisms to exhibit self-organization and emergent functionalities. Web 2.0 technologies, Google, eBay, Amazon, Wikipedia and crowdsourcing applications are regarded by these authors as stigmergic systems, minimizing the need for anticipation, memory, communication, mutual awareness, simultaneous presence, imposed workflow and division of labour. All these advantages make stigmergic models of computation an appropriate approach to support the facilitation of CDMP.
A common way to support the cognitive process of constructing a model for a CDMP is to represent this problem as a composition of component models (i.e. the thinkLets concept presented in Sect. 3.4.1) linked in a weighted graph. The component models commonly stand for the possible states of the problem, while the links are the set of possible actions that guide the design decision from one state of the problem space to another. Structurally, this knowledge takes the form of a collective mental map (Heylighen 1999) comprising all the component models discovered and documented by the users' community (i.e. the “problem space” in Fig. 5.1). These pieces of information are dynamic in nature and are created, structured, and refined over time by all the users who indirectly interact with the collaborative software tool when they model and execute a CDMP. Due to many practical constraints, it is unfeasible for a user to exhaustively explore the problem space. Consequently, when designing a CDMP, the user explores only a small part of the problem space (i.e. the “solution space” in Fig. 5.1). At a certain point in time, during the CDMP design, a user is conceptually “located” in a component model from the problem space, performing one of the following basic actions:

Fig. 5.1 The collaborative software tool as a stigmergic environment for CDMP design (adapted from Zamfirescu and Candea 2011)

• evaluating the performances of the next possible component models that can be executed;
• selecting the next best component model to further complete the solution for a CDMP;
• executing the selected component model;
• updating the performance recorded on the link between the two previously executed component models.

In this way we obtain the classical feedback loop of stigmergic coordination described by Grassé (1953), where an action produces a mark (i.e. the execution of a component model and its updated performance) which stimulates a subsequent action (i.e. the evaluation and selection of a component model), which produces another mark, and so on.
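The loop above presumes nothing more than a weighted directed graph whose edge weights act as stigmergic marks. A minimal Python sketch of such an environment follows; the class and method names (`ProblemSpace`, `mark`, the thinkLet labels) are our own illustrative choices, not identifiers from the authors' tool:

```python
import random

class ProblemSpace:
    """A collective mental map: component models (nodes) linked by
    directed edges whose weights record collectively built-up performance."""

    def __init__(self):
        self.performance = {}  # (j, k) -> accumulated performance on edge j->k

    def add_link(self, j, k, initial=None):
        # Edge performances start at random values, as in the simulation setup
        self.performance[(j, k)] = random.random() if initial is None else initial

    def successors(self, j):
        """Evaluation step: the candidate next component models from node j."""
        return [k for (a, k) in self.performance if a == j]

    def mark(self, j, k, observed, w=10.0):
        """Update step: leave a trace of the observed execution performance
        on the edge j->k (a simple weighted additive rule)."""
        self.performance[(j, k)] += observed / w

space = ProblemSpace()
space.add_link("brainstorm", "organize", initial=0.4)
space.add_link("brainstorm", "vote", initial=0.6)
space.mark("brainstorm", "vote", observed=0.8)  # trace left after execution
```

Every user action reads and writes only these shared edge weights, which is exactly what makes the coordination indirect (stigmergic) rather than message-based.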

5.1.2 The Computational Model and Its Implementation

In (Zamfirescu and Filip 2010), we detailed an agent-based model that mimics the users' conceptual ‘navigation’ over the semantic structure of the problem space for facilitating the e-meetings. The experiment presumes the availability of a software tool used to support the collaborative design of CDMP. This tool aims to support the conceptual representation of CDMP models (see Sect. 3.4.2) and the exploitation of the experiential knowledge collected from the past execution of these models. The tool is considered a stigmergic system through which the users interact and coordinate their actions when a CDMP is designed. Therefore, the agent-based model entails the description of: (a) the shared environment; (b) the agents' behaviour; (c) several facilitation scenarios for CDMP; and (d) the model implementation.

5.1.2.1 The Semantic Environment for Facilitating the e-Meetings

The environment is the collaborative software tool that supports the conceptual representation of CDMP and records the experiential knowledge collected once a CDMP has been executed (Fig. 5.1). It acts as a stigmergic environment for collaboration engineering, reflecting the problem space of facilitating the e-meetings. According to Parunak (2006), any stigmergic environment presumes the definition of its structure, states, and processes. To encode the experience of executing various CDMP models, we have used the collective mental map concept proposed by Heylighen (1999). A collective mental map is defined as a composition of component models (specialized knowledge, each represented as a node) linked in a weighted graph. The graph is meant to support the design effort to reach any component model of the graph from any other component. For CDMP the basic decomposition unit is the thinkLet concept described in

Sect. 3.4.1. It stands for a possible state of the CDMP, while the edges among thinkLets are possible design actions that guide the CDMP modelling from one state to another. Note that this graph includes all the thinkLets recorded and experienced by the entire community of users. The edges record the performance of executing a thinkLet from the current state of a CDMP. The performances from all the graph’s edges describe the state of the environment over time and reflect the users’ experience of employing a thinkLet for a CDMP. In this way the semantic environment for facilitating the e-meetings records correlated information among the thinkLets and the users’ experience with their deployment in real-life conditions. Any stigmergic environment has its own processes that modify the state of the environment (Parunak 2006). In our simulation we apply a simple weighted additive rule to simulate the aggregation of performances:

Pjk(TLk, t) = Pjk(TLk, t − 1) + UPjk(TLk)/w    (5.1)

where:
t is the temporal component of the model, incremented by one for each successive use of the GDSS;
k is the identification index of a thinkLet from the entire set of thinkLets used to model the CDMP;
UPjk(TLk) is the user's performance of the k-th thinkLet evaluated from the side of thinkLet j at moment t;
Pjk(TLk, t) and Pjk(TLk, t − 1) are the new and previous values of the (collective) performance stored on the edge between the thinkLets j and k;
w is a tuning parameter, arbitrarily chosen, that weights the impact of the last user's experience in deploying the thinkLet k.
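Under these definitions, Eq. (5.1) is a one-line update per edge. A small sketch (the function and variable names are ours) shows how w damps the influence of any single new experience:

```python
def update_performance(p_prev, user_perf, w):
    """Eq. (5.1): blend the latest user-reported performance UPjk(TLk)
    into the collective performance Pjk stored on the edge (j, k)."""
    return p_prev + user_perf / w

# With w = 10, even a very good deployment (UP = 1.0) moves the
# collective value only modestly: 0.50 -> 0.60.
p = update_performance(0.50, 1.0, w=10.0)
```

A larger w therefore means slower drift: the collective memory stays more stable against outlier evaluations by individual users.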

5.1.2.2 The Users’ Behaviour

The agents are the users who interact with the envisioned collaborative tool to design CDMP. Hypothetically, at any stage, an agent is “located” in a node (i.e. the last executed thinkLet) of the problem space, performing one of the following basic actions (Zamfirescu and Filip 2010):

• evaluating the preference for the next thinkLets that are going to be executed to achieve the group's goal;
• selecting, from the set of thinkLets evaluated in the previous action, the best one to further complete the CDMP;
• executing the thinkLet selected earlier;
• using some of the criteria detailed in Table 3.11 (see Sect. 3.4.1) to provide the performance measure for the executed thinkLet.

In our agent-based simulation the evaluation is computed with Eq. (5.1), while the selection follows Luce's axiom (Luce 1959) for modelling human choice behaviour:

pjk = e^(Pjk(TLk)/T) / Σ(i=1..m) e^(Pji(TLi)/T)    (5.2)

where pjk represents the preference for an alternative thinkLet, i.e. the probability of selecting the thinkLet k (TLk) from the thinkLet j (TLj); i is the index of the thinkLets connected from the thinkLet j (for simplicity, in our experiment all the m thinkLets available in the problem space are connected); and T is a parameter used to define the deviation from a purely rational behaviour. Note that the uncertainty associated with the construction of preferences is modelled in Eq. (5.2) by the parameter T, which ranges between zero (when the selection is deterministic, as in the ideal case of a perfectly informed decision) and one (when the selection is completely random, as in the case of a completely irrational decision).
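Eq. (5.2) is the familiar exponential (softmax-style) form of Luce's choice rule. The sketch below (function name ours) shows how T trades off exploitation of the best-performing edge against random exploration:

```python
import math

def luce_probabilities(performances, T):
    """Eq. (5.2): selection probabilities pjk over the thinkLets reachable
    from thinkLet j, given the edge performances Pji(TLi) and parameter T."""
    weights = [math.exp(p / T) for p in performances]
    total = sum(weights)
    return [w / total for w in weights]

# Three candidate thinkLets with edge performances 0.9, 0.5 and 0.1:
sharp = luce_probabilities([0.9, 0.5, 0.1], T=0.05)  # near-deterministic
flat = luce_probabilities([0.9, 0.5, 0.1], T=1.0)    # much more random
```

As T approaches zero the best-performing edge is chosen almost surely; at T = 1 the probabilities flatten toward an even spread, mimicking an uninformed decision.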

5.1.2.3 Facilitation Scenarios

The agents reflect the practitioners engaged in means-ends reasoning activities to design a suitable collaboration model meant to achieve the group's goal (see Table 3.13 in Sect. 3.4.2). During the execution of the model they often need to refine the initial model, in order to reflect both changes in the group's goal (N.B. some CDMP may take months, or even years, to be concluded) and uncertainties in executing the model (e.g. when the intermediate deliverables are inappropriate). Furthermore, when the group's goal is ambiguous or too complex for designing a complete model, the practitioners define intermediate sub-goals to be subsequently achieved. Consequently, the design of a CDMP model is often done incrementally and interleaved with its execution. Table 5.1 defines three basic design strategies (DS) that correspond to the two basic approaches to automating the facilitation tasks (i.e. centralized and decentralized; see Sect. 3.4.3), together with a mixed approach. They are:

• DS1 corresponds to the traditional use of GDSS, when the model for a CDMP is predefined by the facilitator. In this case, the user provides a complete structure of the CDMP and its deployment does not need any refinement. It includes all the necessary details for the group activities (see Sect. 3.4.2).
• DS2 relates to the circumstance when the user needs to redefine the remaining part of a model (not yet executed), due to uncertainties associated with its

Table 5.1 Different modelling and execution strategies for CDMP (adapted from Zamfirescu and Filip 2010)

Design strategy    Execution    Group's goal
DS1                Certain      Stable
DS2                Uncertain    Stable
DS3                Uncertain    Unstable

deployment. In this case the group's goal is stable, but the group activities have uncertain outcomes.
• DS3 corresponds to the situation when the user needs to redefine some parts of the model (i.e. the sub-models for the group activities related to sub-goals), due either to uncertainties associated with its deployment or to changes of the group's goal. In this case the users are expected to identify intermediate sub-goals that are subsequently dealt with, as in the means-ends strategy proposed by Simon (see Sect. 3.4.2).

5.1.2.4 Implementation

The agent-based model detailed above was implemented in the NetLogo multi-agent simulation environment (Wilensky 1999). In the implementation, the users (turtles in NetLogo) engage in the facilitation of e-meetings, trying to define CDMP by moving in the conceptual graph of thinkLets. The nodes and edges of the semantic environment are also implemented as turtles. The number of thinkLets that compose the graph may be configured from the interface (the num-TLs variable in Fig. 5.2), while their performances are preset with random values between zero and one when the experiments are initialized. Note that the NetLogo implementation includes additional variables that are relevant for other experiments, such as the one reported in Zamfirescu and Candea (2011).

5.1.3 Some Experimental Results

To evaluate the facilitation strategies of CDMP, we conducted a virtual experiment following the research methodology proposed by Carley (1999). The agent-based model was first validated against the results reported in traditional ethnographical studies. Next, for each modelling and execution strategy defined in Table 5.1 (the Planning-degree variable from Fig. 5.2), we simulated 100 successive facilitation cycles, denoted as iterations in the NetLogo implementation. An iteration presumes the following phases (see also Fig. 5.1):

• finding a suitable thinkLets-chain by successively applying Eq. (5.2) for each component thinkLet;
• executing the identified CDMP and assessing its performance by averaging the performances of the thinkLets that compose the CDMP;
• evaluating the CDMP by updating with Eq. (5.1) the performance of each thinkLet from the CDMP.

In the results from the following subsections we considered the problem space of facilitating the e-meetings to be composed of 70 thinkLets, from which the facilitators should find a CDMP composed of five thinkLets. Additionally, to favour a faster

Fig. 5.2 The interface of the model in the NetLogo environment (Zamfirescu and Filip 2010)

convergence rate of finding a suitable composition of thinkLets for the CDMP, the parameter T from Eq. (5.2) is set to 0.7 (the pheromone-sensitivity variable from Fig. 5.2).
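The iteration loop described above can be sketched in Python. The exact forms of Eqs. (5.1) and (5.2) are defined earlier in the book and are not reproduced in this section, so the probabilistic selection rule below (an exponential weighting with sensitivity T) and the running-average performance update are illustrative stand-ins, not the book's actual equations:

```python
import math
import random

rng = random.Random(42)

NUM_TLS = 70      # thinkLets in the problem space
CHAIN_LEN = 5     # thinkLets per CDMP
T = 0.7           # pheromone-sensitivity (stand-in for the parameter T of Eq. 5.2)

# Predefined performance of each thinkLet, preset with random values in [0, 1],
# and the performance recorded on each edge of the (fully connected) graph.
quality = [rng.random() for _ in range(NUM_TLS)]
edge_perf = [[0.5] * NUM_TLS for _ in range(NUM_TLS)]

def select_next(current):
    """Pick the next thinkLet with probability increasing in recorded performance."""
    weights = [math.exp(edge_perf[current][k] / T) for k in range(NUM_TLS)]
    return rng.choices(range(NUM_TLS), weights=weights)[0]

def iterate():
    """One facilitation cycle: build a chain, execute it, update the environment."""
    chain = [rng.randrange(NUM_TLS)]
    while len(chain) < CHAIN_LEN:
        chain.append(select_next(chain[-1]))
    performance = sum(quality[t] for t in chain) / CHAIN_LEN  # execute + assess
    for a, b in zip(chain, chain[1:]):   # evaluate (running-average stand-in for Eq. 5.1)
        edge_perf[a][b] = 0.5 * edge_perf[a][b] + 0.5 * performance
    return performance

results = [iterate() for _ in range(100)]  # 100 successive facilitation cycles
```

Over the 100 iterations, edges belonging to well-performing chains accumulate higher recorded performance and are selected more often, which is the stigmergic convergence effect discussed below.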

5.1.3.1 Impact of Facilitation Strategies on Performance

Figure 5.3 shows, for each facilitation scenario defined in Table 5.1, the aggregate normalized performance resulting from averaging the predefined performances of all the thinkLets that compose a CDMP of five thinkLets. As may be expected, the performance fits an exponential function, a typical behaviour for a stigmergic system (Guerin and Kunkle 2004; Bărbat et al. 2005; Parunak et al. 2005). Like any heuristic model, the stigmergic coordination mechanisms do not guarantee finding an optimal solution, but a near-optimal or acceptable one. Consequently, there are some variations in the convergence values of the performance from one experiment to another. The convergence value is the performance of a CDMP which no longer improves despite the increasing number of iterations. For instance, Fig. 5.4 depicts in box-and-whisker diagrams the distribution of convergence values obtained from 30 experiments for the performance of the DS1, DS2, and DS3 (see Table 5.1) facilitation scenarios. The three facilitation scenarios show different performances and convergence values. DS2 benefits from the prior experiences not only in relation with the entire model, as in DS1, but also from the intermediate sub-models that are

Fig. 5.3 The performance of DS1, DS2, and DS3 (Zamfirescu and Filip 2010)

Fig. 5.4 The distribution of convergence value for the performances of DS1, DS2, and DS3 (Zamfirescu and Filip 2010)

subsequently redefined, due to uncertainties associated with its execution. In addition, DS3 decreases the granularity by adding the opportunity to decompose the group's goal, and consequently to exploit the prior experiences of executing an extended set of sub-models for the group activities. Therefore DS3 shows the best performance results and a low inconsistency among the identified solutions to model a CDMP. DS2 converges faster to an acceptable solution. Compared with DS1, it also shows a lower inconsistency among the feasible solutions due to the additional constraint of keeping a stable goal for the group during the modelling process. In summary, all these figures show the influence of different facilitation strategies that require various capabilities to structure the knowledge encoded in the software tool used to support the design of CDMP. As mentioned above, we have assumed in this experiment that the users collaborate asynchronously through the stigmergic environment, which supports the identification of the most appropriate thinkLets-chain to model a CDMP.

5.1.3.2 Cognitive Complexity of Facilitation Strategies

The weighted relations between the thinkLets encoded in the semantic environment (see Sect. 5.1.2.1) entail a decrease in the degree of freedom when selecting a thinkLet for the CDMP design. This is a result of the emergence of contextual constraints (i.e. performance updates) that reduce the probability of a thinkLet being selected (see Eq. 5.2). The degree of freedom corresponds to the probabilistic distribution of preferences for the selection of a thinkLet. As suggested by Guerin and Kunkle (2004), this distribution can be computed with the normalized Shannon entropy:

E(p_jk) = −Σ_{k=1}^{m} p_jk · ln(p_jk) / ln(m)    (5.3)

where:
• p_jk represents the selection probability of the thinkLet k from the thinkLet j;
• k is the index of the thinkLets connected from the thinkLet j, ranging from one to m (for simplicity reasons, in our experiment all the m thinkLets available in the problem space are connected).

When the recorded performance is the same for all the modelling alternatives, the user is practically assessing the entire set of thinkLets: the probabilities in Eq. 5.3 being equally distributed, the normalized entropy will be one. Otherwise, when the recorded performances favour a single alternative, the user will have no "freedom" in selecting the best thinkLet: all the probabilities from Eq. 5.3 are zero, except for the best alternative, which is one, and the normalized entropy will be zero. Therefore, the entropy associated with the selection of a thinkLet is seen as a measure of the cognitive complexity of modelling a CDMP. Figure 5.5 shows the cognitive complexity for each facilitation scenario defined in Table 5.1. The data are obtained for the same experimental settings as introduced at the beginning of this section. The figure corresponds to the average of entropies for all five successive thinkLet selection actions needed to complete the CDMP.
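Equation (5.3) is straightforward to compute; the sketch below checks the two limiting cases discussed above (uniform preferences versus a single dominant alternative):

```python
import math

def normalized_entropy(probs):
    """Shannon entropy of the selection probabilities, normalized by ln(m)
    so that the result lies in [0, 1] (Eq. 5.3)."""
    m = len(probs)
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return h / math.log(m)

# Equally distributed preferences: the user assesses the whole set (entropy close to 1).
uniform = normalized_entropy([0.25, 0.25, 0.25, 0.25])

# One dominant alternative: no "freedom" left in the selection (entropy 0).
dominant = normalized_entropy([1.0, 0.0, 0.0, 0.0])
```

The convention of skipping zero-probability terms (`if p > 0`) follows the usual limit p·ln(p) → 0 as p → 0.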

Fig. 5.5 The normalized entropy of DS1, DS2, and DS3 (Zamfirescu and Filip 2010)

Similar to the distribution of performances from Fig. 5.3, the normalized entropy for DS2 and DS3 converges faster to zero than for DS1. For DS3, the gradual increase of representation complexity through the recursive definition of composite sub-models significantly reduces the cognitive complexity of modelling the CDMP. Nevertheless, it requires an increased semantic complexity to cope with the diversity of group goals for which the GDSS is used. Only in this way may the synergy among partially overlapping CDMP be effectively exploited.

5.1.4 Discussion and Concluding Remarks

With the advent of web technologies (see Sect. 4.2) and social networks (see Sect. 4.3), the simplicity of stigmergic models of computation has become a popular approach for designing decentralized systems meant to run in open, dynamic, and unknown environments. Inspired by the behaviour of social insects, the concept covers a broad spectrum of mechanisms able to generate an intelligent collective behaviour. Despite its behavioural simplicity (i.e. stimulus-response rules), according to Parunak and Brueckner (2009) a single stigmergic agent can emulate any Turing machine and can execute any symbolic or sub-symbolic algorithm proposed by the mainstream of AI research. The employment of stigmergic models of computation to automate the facilitation of e-meetings aims to eliminate the obstacles met in the centralized approach (see Sect. 3.4.3): (a) the restrictions on codifying the facilitation knowledge into the computing system; (b) the lack of self-development capabilities for this knowledge; (c) the black-box perspective over a system disconnected from the environment where CDMP are executed.
The results from the agent-based simulation presented in this section and further detailed in Zamfirescu and Candea (2011) reveal a great dependence on many contextual factors that require an adequate scale of participation to make a decentralized approach to the automatic facilitation of e-meetings feasible. Factors like the frequency and diversity of the problems for which the GDSS is used, a balanced group composition in terms of experiential knowledge, and the willingness of group members to participate in the modelling activities and to provide feedback on the performance of CDMP after their execution are not easily achievable in organizations where the hierarchical control structure is still predominant. For this reason, in the real implementation of these results (see Sect. 5.3) we are considering a gradual deployment strategy: the system is provided with some initial, frequently-used CDMP models, which can be further refined over time by the user community. Nevertheless, we expect the traditional organization to be transformed in the near future into a social network that will allow cross-functional and cross-hierarchical communication, collaboration and knowledge sharing in real time. In this heterarchical structure, people will not be merely passive executants of CDMP but, on the contrary, active participants in their design and execution.

A research topic that shares some ideas with the approach presented in this section is the engineering of collaborative scientific workflows for virtual experiments (Rourea et al. 2009). It adopts a social network approach (see Sect. 4.3) for sharing the workflows' design and reusing coarse-grained structures to solve a certain problem type. This approach works well for a small and very specialized community of users (e.g. the experts from a research domain) with a common understanding of the meaning of workflow execution. In real organizations, where the problem types are very diverse, a common view over the CDMP models would be very difficult to achieve. Therefore, the design and execution of CDMP require leveraging the full potential of collective intelligence (i.e. diversity, creativity, large-scale and long-term participation, etc.) with fine-grained workflows.

5.2 An Application of Data Mining to Decisions in Labour Market (By Claudiu Brândaş and Ciprian Pânzaru)

The basic aspects of Business Intelligence and Analytics (BI&A) and Big Data (BD) were presented in Sect. 4.1. The corresponding technologies have been deployed in various application fields, such as e-commerce, banking, insurance, health, manufacturing and so on. Recently, BI&A and BD have become increasingly useful in labour market analysis (Askitas and Zimmermann 2015; Kreibich 2015; Kurekova et al. 2014). Early works related to the labour market addressed the prediction of the unemployment phenomenon (Bollier 2010). Other studies focused on correlations between Google searches for the word "unemployment" and unemployment data from official statistics (Hilbert 2013, p. 11). Based on this information, unemployment rates could be forecast for making various decisions (Larsen and Rand 2015). At present, online job portals have become an important data source in the analysis of the labour market. Online job portals provide useful information for designing and implementing new models and tools for innovating Labour Market Intelligence and Services (Dusi et al. 2015, p. 31). In this context, a short discussion should be made about Labour Market Information and Labour Market Intelligence. The International Labour Organization (ILO) defines Labour Market Information as "any information concerning the size and composition of the labour market or any part of the labour market, the way it or any part of it functions, its problems, the opportunities that may be available to it, and the employment-related intentions or aspirations of those who are part of it" (Thuy et al. 2001, p. 57). In some situations, the concept of Labour Market Intelligence is used to point out that the information has already been analyzed and reduced to the facts that are important and relevant for decision-making (Lantra 2005). To avoid confusion, a solution is to use LMI for both information and intelligence in the labour market (Woods and O'Leary 2006).
An LMI provides quantitative and qualitative information and intelligence to help labour market agents in their decision-making. Recent developments introduce the concept of Labour Market Analytics (Schneider and Deane 2014).

The majority of the new developments can be noticed in software and recruiting companies. One example is the Labour Market Intelligence Predictive Analytics Platform (LMI-PAP), a granular-reporting, cloud-based intelligence and analytics software product focused on labour market metrics, developed by the company Focal HR (http://focalhr.com/). In a similar way, the firm CareerBuilder.com (http://www.careerbuilder.com/) released a machine learning-based semi-supervised job title classification system for the online job recruitment domain (Javed et al. 2014). These tools use sophisticated technologies to convert unstructured data into meaningful analytics, allowing the implementation of a real-time data collection and labour market information (RT-LMI) service from web portals. The objective of this type of analytics is the reduction of the Time to Answer (TtA) and Data-to-Decision (DtD) intervals. In the rest of this section, we present a framework for the construction and implementation of a Labour Market Decision Support System (LM-DSS) based on web mining, data mining and spatialization (spatial mapping of results). The LM-DSS is meant to provide information and knowledge for decision-making processes carried out by diverse users, such as job seekers, researchers, consultants, business and governmental policy makers, education and training institutions, public and private employment agencies, and social services. A preliminary version of this section was presented by Brandas et al. (2016).

5.2.1 A Framework of a Labour Market Decision Support System (LM-DSS)

The LM-DSS is meant to provide information and knowledge about labour market dynamics, such as job vacancies and skills, their concentration on territorial units and economic activities, the most requested skills, education and training programmes, unemployment and labour force statistics, wage information and projections. The conceptual model of the system is presented in Fig. 5.6. The software tools used in LM-DSS are:
• Import.io (https://www.import.io/), a software tool that allows the extraction and conversion of semi-structured data into structured data. The collected data can be exported as CSV (Comma-Separated Values), Excel, Google Sheets or JSON (JavaScript Object Notation).
• Waikato Environment for Knowledge Analysis (WEKA), a machine learning software package for data mining processes. It contains tools for data pre-processing, classification, regression, association rules, clustering and visualization (http://www.cs.waikato.ac.nz/ml/weka/).
• Google Fusion Tables (https://support.google.com/fusiontables/answer/2571232?hl=en), an experimental data visualization web application that allows the gathering, visualization, and sharing of data tables using Google Maps.

Fig. 5.6 Framework of a labour market Decision Support System (LM-DSS)

One of the main ways to get information about the labour market is based on the extraction of knowledge from large unstructured data collections (Brândaş and Pânzaru 2015). Web mining and data mining of online job ads can be a valuable way to get real-time labour market information (Dorrer 2015). The extraction and processing of data through web mining, together with knowledge discovery through data mining, represent the effort to understand, analyze and, possibly, use a huge amount of available data (Fayyad et al. 1996).
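As a minimal illustration of the structuring step, the snippet below parses a CSV export of scraped job ads into (job_title, location) records using the Python standard library. The inline sample data and the exact column labels are stand-ins, not the project's actual data set:

```python
import csv
import io

# Hypothetical sample standing in for an Import.io CSV export of job ads;
# the real data set and its exact layout are not reproduced here.
raw = """job_title,location
Postdoc in Mathematics,Luxembourg
Senior Lecturer,Netherlands
Research Associate,United States
"""

def load_job_ads(text):
    """Turn the semi-structured CSV export into a list of structured records."""
    reader = csv.DictReader(io.StringIO(text))
    return [(row["job_title"].strip(), row["location"].strip()) for row in reader]

ads = load_job_ads(raw)
print(ads[0])  # ('Postdoc in Mathematics', 'Luxembourg')
```

Once the ads are in this structured form, deduplication and clustering (the next steps of the procedure described in this section) can operate on uniform records.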

5.2.2 Example

In the application example presented in the sequel, we focused on data and information about academic job vacancies, extracted from academic job portals. Consequently, we scraped data from several websites, such as: www.dk.academicpositions.eu, www.ge.academicpositions.eu, www.it.academicpositions.eu, www.fr.academicpositions.eu, www.academicjobseu.com, www.universitypositions.eu, www.academics.com, www.jobs.ac.uk/categories/academic-jobs-europe, www.eurosciencejobs.com/jobs/academic, www.unijobs.com, https://www.akadeus.com, www.careeredu.eu.

5.2.2.1 The Mining Procedure

There are four main steps:
1. The web content is used to extract and structure data.
2. The duplicates are eliminated.
3. The data objects (records from a data source) resulting from the previous step are processed by using data mining clustering techniques (Berkhin 2006) and a simple unsupervised k-means clustering algorithm (Hartigan and Wong 1979, p. 100). It is an algorithm for classifying or grouping data objects based on their attributes into k clusters, by minimizing the Euclidean distance between each data object and the cluster centroids (mean vectors). The method involves starting with k values (possibly randomly chosen) and building the initial k clusters centered on them. The clusters are then refined in an iterative way (Teknomo 2007). At each iteration, the operations to be performed are as follows:
• for each cluster (group of data), the coordinates of the centroid (center of the cluster) are determined;
• the squared Euclidean distances of all objects to the centroids are calculated;
• the objects are allocated to clusters based on their minimum distance to the centroids;
• the cluster centroids are recalculated and the objects are reallocated.
The iterations continue until all Euclidean distances are minimized and no further move (minimization of a distance) is possible.
4. The results are spatialized for a better visualization.
In the web content mining process using Import.io, we applied the Extractor function on the research and academic jobs websites and obtained a data structure: the research_and_academic_jobs_data_set table with the attributes job_title and location (see Fig. 5.7).
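The iterative procedure in step 3 can be sketched in a few lines of Python. This is a generic k-means on numeric vectors, shown for illustration; the actual analysis in this section was run in WEKA on the scraped job data:

```python
import random

def kmeans(points, k, max_iters=100, seed=0):
    """Plain k-means: start from k randomly chosen centroids, assign each
    object to the nearest centroid by squared Euclidean distance, recompute
    the centroids, and repeat until the assignment no longer changes."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    assignment = [-1] * len(points)
    for _ in range(max_iters):
        new_assignment = [
            min(range(k),
                key=lambda c: sum((x - y) ** 2 for x, y in zip(p, centroids[c])))
            for p in points
        ]
        if new_assignment == assignment:   # converged: no object moved
            break
        assignment = new_assignment
        for c in range(k):                 # recompute each cluster centroid
            members = [p for p, a in zip(points, assignment) if a == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assignment, centroids

# Two well-separated 2-D blobs should end up in two different clusters.
data = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 5.0), (5.1, 5.2)]
labels, _ = kmeans(data, k=2)
```

Note that, as a heuristic, k-means converges to a local optimum that depends on the initial centroids, which is why tools such as WEKA let the analyst vary the random seed.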

5.2.2.2 The Analysis Process

By using WEKA's simple k-means clustering algorithm, the following clusters were obtained for the research_and_academic_jobs_data_set. There were 10 clusters, with one instance for each cluster representing the cluster centroid (the mean vector of the cluster), as shown in Table 5.2. Cluster #6 contains the largest share of instances (18 %). It is followed by Cluster #4, which contains 14 % of the total number of instances. Clusters #0 and #1 contain the fewest instances (5 % each).

Fig. 5.7 Web content mining using Import.io

Cluster analysis using WEKA shows the densities of the most requested academic jobs in each country. The results obtained in WEKA can be analyzed from at least two perspectives: (a) the diversity of the job demands and (b) the quantity of jobs offered. The United States and Sweden stand out as the countries with the largest and most diversified offer of available jobs. We find that the most diverse job offers are in the United States, a fact which corresponds to multiple research opportunities due to a large number of research institutes. It should be noted that four clusters out of ten (namely clusters #4, #5, #8, and #9) are placed in the United States. We have also noticed a large number of academic vacancies in the Netherlands. In Europe, a significant number of academic vacancies are in France, Germany and Luxembourg. Australia is another country with an important academic job demand.

5.2.2.3 The Data Spatialization

Fusion Tables and Google Maps have been used for a better visualization of the results. Figure 5.8 shows the spatialized data for the academic labour market demand. The decision-makers could analyze the above results, also taking into account additional information, such as academic domain, expenditure on education, the brain drain phenomenon, unemployment and so on. Determining the labour force needs and the types of employees are important issues not only for eliminating dysfunctions in the labour market, but also for developing strategic economic advantages. The results could be useful for a wide category of decision makers. Thereby, the results could be used by local and central public administration bodies to develop labour market policies, by education and training providers to calibrate their supply

Table 5.2 Academic jobs (WEKA) clustering results

Attribute | Cluster #0 | Cluster #1 | Cluster #2 | Cluster #3 | Cluster #4
Location | Luxembourg | Germany | France | Netherlands | United States
Job title | Postdoc in Mathematics | Research fellowships | Research department chair | Senior lecturer | Clinical instructor/clinical professor

Attribute | Cluster #5 | Cluster #6 | Cluster #7 | Cluster #8 | Cluster #9
Location | United States | Sweden | Australia | United States | United States
Job title | Lecturer | Lecturer | Research assistant | Research scientist/engineer | Research associate

Clustered instances: #0 492 (5 %), #1 490 (5 %), #2 931 (9 %), #3 1242 (12 %), #4 1521 (14 %), #5 1354 (13 %), #6 1875 (18 %), #7 955 (9 %), #8 947 (9 %), #9 726 (7 %)

Fig. 5.8 Data spatialization of the academic jobs

to the market needs, by jobseekers to find a job, and by economic players to make investment, relocation or development decisions. The results could also be used to identify decision trees for modeling career paths.

5.2.3 Comments

In this section, we have shown that big data analysis and the extraction of knowledge from data collections can be successfully used to analyse the labour market. In this context, the Labour Market Decision Support System can be a solution for advancing Labour Market Analysis. The next level in analyzing the labour market should be based on the idea of the Smart Labour Market (SLM), which integrates in an intelligent way the actors (the decision-makers), technology and information (Fig. 5.9). The main characteristics of the Smart Labour Market for decision makers are: personalized, meaningful, adaptive and relevant information. One may view the Labour Market Decision Support System as a part of the Smart Labour Market.
Regarding the data processing, several strengths and weaknesses have been noticed. One of the strengths is the capability of processing large amounts of data which, once collected and sorted, enable the identification of patterns by data mining without the need for human and logistical resources to collect data through field research. Since we have often had to deal with unstructured or semi-structured data, the solution is to use software applications such as web scrapers, web spiders, crawlers or web data extractors. These applications are often programs specifically designed for a particular data source and have the ability to retrieve structured data.

Fig. 5.9 Smart labour market

A weakness related to the privacy policies of websites has also been noticed: some websites allow the extraction of data, while others are very strict in this respect. Probably the most important limitation is generated by the diversity of the sources of information and of the attribute values. This explains the lack of structural and semantic compatibility of data, which occurs frequently. In the future, ontologies, web semantics and Web 3.0 could provide a good solution to this problem.

5.3 iDecisionSupport Platform (By Ciprian Cândea)

In Sect. 2.4.2.3, we announced the usage of integrated platforms to serve as DSS. In this section we describe a practical case which illustrates the deployment of several technologies presented in Chap. 4. A previous version of the text was published by Cândea and Filip (2016).
In the last decade, the Romanian company Ropardo (http://ropardo.ro/) has conducted research and development activities in the decision support domain, analyzing different theoretical ideas and testing multiple algorithms and different technologies involved in its software solutions. The I&CT platform that supports Ropardo activities in the decision support domain, iDS (iDecisionSupport), is designed to be easy to use, to avoid the need for complex training and, especially, to prevent user rejection. iDS provides a collaborative working environment where team members create and attend different types of decision sessions, named work sessions. It supports collaborative and group decision sessions as well as individual decision sessions, in a way that leaves the users free to collaborate with other users at each decision stage. From a technical point of view, iDecisionSupport has been developed as a "framework for decision support tools that provides a collaborative environment, where different software tools for decision making can be easily integrated while the users can access them remotely and asynchronously" (Georgescu et al. 2007). To actively support user needs, the iDS platform facilitates each step of the decision-making process: intelligence, design, choice, implementation and review (see Sect. 2.1). For each of these phases, at least one software tool is provided so that the user can select and use the most appropriate one. To ensure full flexibility and to respond to various application-specific needs, iDS allows the integration of third party tools.
The default set of tools comprises: the discussion list (a forum-like tool for discussions), voting (a tool that allows grading or expressing agreement over a set of issues), electronic brainstorming based on the Issue Based Information System (IBIS) approach (Conklin 2003; Conklin and Begeman 1988; Mackenzie et al. 2006), Mind Map (Buzan 2005), and Categorization (Sebastiani 2002). The decision model implemented on the iDS platform is based on the Shared Plans theory (Grosz and Kraus 1996) and was tested for the first time as a software prototype in 2001, as described in Zamfirescu et al. (2002).

5.3.1 The Concept

5.3.1.1 Terms and Definitions

The iDS platform uses terms such as projects and plans, workflows, sessions and tools, as shown in Fig. 5.10. The reporting functionality is present at all iDS levels and gives users the possibility to search and extract data from the application at any level of detail, allowing the analysis of every aspect of the decision-making process. At project and plan level, the members' rights within the project, its goal and its duration are defined. The workflow is made up of a succession of sessions, which can be pre-defined or revised during the decision-making process. The session's aim is to define the timeframe, the members and the tool that is most representative for its purpose. The configuration functionality for these tools is also provided. The tool is the key element of a session, being used to reach the goal for which the session was created. It can have its own interface and function in the iDS context, or it can be a stand-alone application that integrates with the iDS API (Application Programme Interface).

Fig. 5.10 iDS concepts

5.3.1.2 The Decision Making Process Model

Selecting the best solution supposes in most cases passing through the steps of the decision process: intelligence, design, choice, implementation and review. iDS provides users with a hierarchical decision-making model where they can organize decision processes as a tree structure. A leaf of the tree represents one decision session that involves a tool. On top of the leaves, any level of structure can be defined, starting with the root, which can be a plan or project with any number of levels of sub-plans. Figure 5.11 presents an example of an iDS tree structure where Project A has two plans, Plan A and Plan B, each of them showing two key characteristics of this model:
1. A plan or sub-plan may have several sessions that can run, in time, sequentially or in parallel.
2. Sessions can be further organized using sub-plans.
Because the model may evolve in time, it is not necessary for each leaf (sub-)plan to have a minimum of one session. As time is a valuable resource in any decision process, iDS represents time in all decision session phases. A decision session is defined as the period of time allocated to a specific decision-making activity.
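The tree organization described above can be captured with a few plain data classes. The class and field names here are illustrative, not the actual iDS data model, and the concrete sessions are made up for the example:

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Session:
    name: str
    tool: str                       # exactly one decision support tool per session

@dataclass
class Plan:
    name: str
    children: List[Union["Plan", Session]] = field(default_factory=list)

Project = Plan                      # a project is the root plan of the tree

def sessions(node):
    """Collect all decision sessions (the leaves) below a project or plan."""
    if isinstance(node, Session):
        return [node]
    return [s for child in node.children for s in sessions(child)]

# A tree in the spirit of Fig. 5.11: Project A with plans Plan A and Plan B
# (the individual sessions shown here are hypothetical).
project_a = Project("Project A", [
    Plan("Plan A", [Session("S1", "brainstorming"), Session("S2", "voting")]),
    Plan("Plan B", [Session("S3", "categorization")]),
])
```

Because plans may nest arbitrarily deep, the recursive `sessions` walk mirrors how reports at any level of the tree can aggregate the sessions below them.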

Fig. 5.11 iDS tree structure of the decision making process

iDS models the decision session as a sequence of precise activities: draft, commit, work and report (Fig. 5.12). For each phase, a precise time frame must be allocated, except for the report, which can be generated any time after the work phase is closed.

Fig. 5.12 iDS session phases

Once a user decides to use the iDS platform, he or she has to create a new decision session in the draft activity. The user, as author, is the only one who can view and access the session, and he/she must configure it: describe the problem, decide whether a commitment phase is needed or not, select the team members, and decide which tool is to be used. As soon as the user has defined and configured the session, he/she publishes it and makes it known to the team members; at this moment the decision session starts. If the commitment phase was selected, the team members are informed about the time deadline of this phase. During the available time window, the users discuss and negotiate important aspects of the session, such as exchanging supporting documents and accepting or refusing to proceed to the next activity. Commitment automatically ends when the time window expires; all users are informed about the conclusions and the team members are invited to the main work phase. During the work activity, the session participants use a decision support tool. Each session has only one decision support tool bound to it.
Choosing a certain decision support tool (during session configuration) establishes the type of the session (i.e. brainstorming session, voting session, etc.). When the time window for the work activity expires, the system notifies all users of the result of the session and all reports are prepared. The workflow engine built into the iDS platform is meant to manage the session flow and provide the flexibility needed for such a complex process.
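The draft–commit–work–report sequence can be read as a small state machine. The sketch below uses the phase names from the text, while the method names themselves are illustrative, not the iDS API:

```python
class DecisionSession:
    """Life cycle of an iDS decision session: draft -> (commit) -> work -> report.
    The commitment phase is optional, as decided by the author in the draft phase."""

    def __init__(self, needs_commitment=True):
        self.phase = "draft"              # only the author can see the session
        self.needs_commitment = needs_commitment

    def publish(self):
        """The author publishes the configured session to the team members."""
        assert self.phase == "draft"
        self.phase = "commit" if self.needs_commitment else "work"

    def end_commitment(self):
        """The commitment time window expired; members move to the work phase."""
        assert self.phase == "commit"
        self.phase = "work"

    def close_work(self):
        """The work window expired; reports can be generated from now on."""
        assert self.phase == "work"
        self.phase = "report"

s = DecisionSession(needs_commitment=True)
s.publish(); s.end_commitment(); s.close_work()
```

The assertions encode the rule that each phase can only follow its predecessor, which is what the built-in workflow engine enforces in practice.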

5.3.2 Current Version

Since the creation of the first version of iDS, technologies, software engineering techniques and computation power have evolved. The current iDS platform takes advantage of Web 3.0 technologies to support collaborative work. It also integrates social network models into DSS. The iDS platform implements a modular architecture that enables the integration of third party tools through modern APIs (Application Program Interfaces). It also facilitates asynchronous decisions accessible through Web 2.0 clients or dedicated mobile clients. Based on this system architecture, iDS can be distributed as: BaaS (Business as a Service), SaaS (Software as a Service), PaaS (Platform as a Service) and IaaS (Infrastructure as a Service), as in Fig. 5.13 (Radu et al. 2014). The functional characteristics of each variant are:
• SaaS (Fig. 5.13b): software modules and all decision support tools are provided as services for the customer company;
• PaaS (Fig. 5.13c): the APIs are accessible, new tools may be plugged into the iDS platform, and new GUIs (Graphic User Interfaces) or mobile clients can be deployed;
• IaaS (Fig. 5.13d): custom deployment models may be established at this level for each specific customer. Due to platform flexibility, different decision

Fig. 5.13 iDS levels. a BaaS level; b SaaS level; c PaaS level; d IaaS level (Radu et al. 2014)

processes can be defined to respond to specific business models, such as: managing meetings, performing analyses of customer needs or of a project's costs and benefits, risk assessment and so on.

5.3.2.1 User Roles

The system supports three main user roles: facilitator, active, and observer. The roles are valid for each decision session as follows:
• The facilitator role allows a user to perform all operations on each of the above entities. In an implicit manner, an iDS user is a facilitator for his/her own decision tree (including all decision sessions in that tree), but he/she plays an implicit active role in any decision session to which he/she is invited.
• The active role enables the user to participate in the decision session. However, the user is not allowed to configure the session. He/she can only perform actions such as voting, commenting, adding ideas or any other action specific to the decision session.
• The observer role means that the user can only watch what happens in the decision session, without having the right to actively participate.
One special function is the anonymity of the user. When this function is activated for a specific decision session, in the work phase all user inputs (i.e. voting preferences, ideas proposed during brainstorming, and so on) are expressed in an anonymous manner and no indication to identify the person is available. iDS does not store this data at all.

5.3.2.2 Platform Description

The iDS platform has three pillars (Fig. 5.14): (a) the server, (b) the clients, and (c) the decision support tools; any of them can be distributed across the cloud. The server is the central component that handles the decision sessions, registers tools, provides access rights for users, and assures the global security of the system; it exposes REST (Representational State Transfer) APIs, and communication is encrypted using the SSL (Secure Sockets Layer) protocol. iDS clients are provided access, via a GUI, to the entire system. There are three types of clients, grouped into two categories:

• Clients for administrative purposes, allowed to configure and administer the entire iDS platform, namely:
  – the web admin client;
• Clients for end-user purposes:
  – the Web 2.0 client;
  – the mobile client, for mobile operating systems such as Android and iOS.

Decision support tools implement various parts of the decision process. They can be used to model complex decision workflows. The tools can be integrated with the iDS core through the iDS Tools Connector. Each of the above components (including individual tools) can reside anywhere on the Internet. The integration is done through the REST API web services of the iDS server.

Fig. 5.14 The iDS platform architecture
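As a rough illustration of how a client might talk to the server's REST API over the encrypted channel described above, the sketch below prepares (without sending) an authenticated HTTPS request. The endpoint path, payload fields, and bearer-token scheme are invented for the example; they are not documented iDS resources:

```python
import json
import urllib.request

# Hypothetical base URL of an iDS server instance.
BASE_URL = "https://ids.example.com/api/v1"

def build_create_session_request(token, session_name, tool_id):
    """Prepare an authenticated HTTPS request that would ask the
    server to open a new decision session (illustrative only)."""
    body = json.dumps({"name": session_name, "tool": tool_id}).encode("utf-8")
    return urllib.request.Request(
        BASE_URL + "/sessions",
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            # The token travels over SSL/TLS, so it is never sent in clear.
            "Authorization": "Bearer " + token,
        },
    )
```

Sending the request with `urllib.request.urlopen` would then go over HTTPS, matching the platform's requirement that all client–server traffic be encrypted.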

5.3.2.3 The iDS Server

The iDS server is the central part of the system and has a decentralized architecture (Fig. 5.15), with modules designed with high scalability in mind. One particular aspect of the architecture is the “plug-in” feature of cloud tools, which implements the tool-as-a-service concept. iDS provides a decision support platform in which different tools can be used, provided they are accessible in the cloud. Consequently, it is possible to create a “tool marketplace” where different providers sell their specialized decision support tools. A dedicated architecture, with defined interfaces between the iDS platform and the tools (Fig. 5.16), supports the “plug-in” functionality.

Fig. 5.15 The iDS server architecture

Fig. 5.16 The iDS tool connector architecture

The server and the tools exchange information using an agreed syntax (message format) and semantics (message meaning). For the moment, the system does not implement a communication standard such as KQML—Knowledge Query and Manipulation Language (Finin et al. 1994)—but uses XML and XSLT, a language for transforming XML documents into other XML documents (Clark 1999), for encoding and interpreting the syntax and the semantics.

To register a new tool into the iDS platform, it is necessary to follow a manual or automatic registration procedure. In both cases, the tool must provide to the server, through the iDS Tools API, import data structures describing its configuration parameters, its result structure, as well as other semantic data. The key data for registering a new tool with the iDS platform are:

• URL, where the tool responds—this can be any Internet location;
• Help content, the tool help content (as CSS/HTML);
• Tool connections, a list of connections of the tool with other tools. This implies that the registered tool will be able to pass its results to other existing tools. For this, an XSLT file (which will handle the transformations of the XML results) must be provided;
• Tool configuration, as XML and XSD, the schema for the tool configuration;
• Tool result, as XML and XSD, the schema for the tool result;
• Report template, for the tool results, so that the iDS server can properly generate a report.

The server stores the tool's intermediate and final results in XML format. Each tool may use the standard iDS XML format or its own format, in which case it must provide an XSLT transformation from and to its own format. One of the server's functionalities is to store these XSLT transformation files as connections between any two different tools. If both tools use standard results, a transformation file is not necessary; it can still be provided if specific conversions are to be done. Based on the above technical implementation, any session can take as input the results of one or more previous sessions. In this way, one can easily create a chain of sessions (Fig. 5.17). All these structures saved on the server side are used in the communication process with the third-party tool (as seen in Fig. 5.18). For easy integration, an open-source library—the iDS Tools Connector (ITC)—is published, which allows any provider to learn quickly how to integrate with the platform.
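The registration data listed above (URL, help content, connections with optional XSLT files, configuration and result schemas, report template) could be serialized into an XML descriptor along the following lines. The element and attribute names are assumptions made for illustration; the actual iDS registration schema is not given here:

```python
import xml.etree.ElementTree as ET

def build_registration(url, help_html, connections):
    """Build an illustrative tool-registration descriptor.

    `connections` is a list of (target_tool, xslt_file) pairs; the XSLT
    maps this tool's result XML to the target's input format and may be
    None when both sides use the standard iDS result format.
    """
    tool = ET.Element("tool")
    ET.SubElement(tool, "url").text = url
    ET.SubElement(tool, "help").text = help_html
    conns = ET.SubElement(tool, "connections")
    for target, xslt_file in connections:
        c = ET.SubElement(conns, "connection", target=target)
        if xslt_file:
            c.set("xslt", xslt_file)
    # Schemas and report template are referenced by (assumed) file names.
    ET.SubElement(tool, "configSchema").text = "config.xsd"
    ET.SubElement(tool, "resultSchema").text = "result.xsd"
    ET.SubElement(tool, "reportTemplate").text = "report.xslt"
    return ET.tostring(tool, encoding="unicode")
```

Note how a connection without an `xslt` attribute models the case above in which both tools use the standard result format and no transformation file is needed.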

5.3.2.4 The Graphical User Interface of Web Client

Based on the experience gained during the implementation of the iDS platform in different organizations (Zamfirescu et al. 2002; Candea et al. 2012) and on users' interviews/feedback received after six months of platform usage, two requirements became obvious:

Fig. 5.17 iDS server-tool communication protocol sample

Fig. 5.18 The iDS web client

1. The need for smart clients to interact with the system;
2. The user interface terminology must be adapted to terms the users are familiar with.

The first version of iDS was created with a Web GUI that represented, at the time, a cutting-edge approach for such an application domain. At present, a Web 2.0 iDS client application is available, through which users can easily access all platform functionalities. The current Web 2.0 iDS client offers rich functionalities that help users focus on the decision process rather than on supporting aspects such as the discussion forum, to-do list, tree-like categorization, project definition, discussion lists, file management, member management, calendar management, social network, etc. All of the features above form the iDS Web 2.0 “smart client”. Based on the feedback, plans were renamed objectives (sub-plans were renamed sub-objectives) and sessions were called meetings; the users consider such terms easier to adopt. The iDS GUI comes with a predefined action flow: projects are defined with objectives; to reach the objectives, the users attend multiple online meetings that are enriched with decision support tools. Figure 5.18 presents a part of these functionalities: the project called VFF has several plans (objectives) defined; objectives contain sessions (meetings); each session is bound to a particular decision support tool (e.g., Action Plan, Vote, SWOT). Other functionalities provided by this GUI may also be noticed: the user has a customizable profile, has access to a personal calendar (where iDS meetings are automatically displayed), and may exchange ideas and share information from key areas of the application through forums and discussion lists; the iDS system thus provides functionalities specific to collaborative platforms.
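The project–objective–meeting hierarchy behind the GUI's action flow can be sketched as a small data model. The class and field names are illustrative, not taken from the iDS code base:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# A project holds objectives (formerly "plans"), objectives hold
# meetings (formerly "sessions"), and each meeting is bound to at
# most one decision support tool.

@dataclass
class Meeting:
    name: str
    tool: Optional[str] = None  # e.g. "Vote", "SWOT", "Action Plan"

@dataclass
class Objective:
    name: str
    meetings: List[Meeting] = field(default_factory=list)

@dataclass
class Project:
    name: str
    objectives: List[Objective] = field(default_factory=list)

# A miniature example mirroring the VFF project mentioned in the text
# (the objective and meeting names here are invented).
vff = Project("VFF", [
    Objective("Improve factory layout", [Meeting("Kick-off", tool="SWOT")]),
])
```

Walking this structure top-down reproduces the predefined flow: pick a project, choose an objective, then join one of its tool-backed meetings.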

5.3.2.5 The iDS Tools Connector

By using the iDS Tools Connector (ITC), any decision support tool (web based or client–server) can be connected to, and communicate with, the iDS platform. The ITC essentially defines a communication protocol between iDS and the tools, based on two APIs: one for the iDS platform and another for the tools (Fig. 5.19). The iDS API allows the iDS server to enroll, command, and unregister a tool. The ITC implements all the sequential phases—configuration, run, and report—needed for a decision session. There are three separate types of behavior that the tool must show, depending on the decision session status. Each of the three phases—Configuration, Work, and Report—has its own GUI and its own functionalities. The business logic produces output results and, once available, these are communicated to the iDS server. An ontology (Fig. 5.20) was defined for the interaction and communication between any tool and the iDS server. This approach allows all iDS tools to share common semantics. The Tool API has two parts. Firstly, there is a programming interface that contains a list of operations through which the ITC can request the tool to perform actions. The requests coming from iDS are forwarded, via the server, to the tool through this interface. Secondly, there is a web service that allows the tool to send data to the iDS server asynchronously. So far, we have successfully integrated decision support tools such as Vote, Brainstorming, SWOT (Fig. 5.21), Action Plan, Mind Map, and Categorizer.

Fig. 5.19 ITC architecture

Fig. 5.20 Message structure

Fig. 5.21 Tool message structure
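A minimal sketch of the sequential phases a tool passes through under the ITC, assuming a simple in-memory state machine; the class and method names are hypothetical, not the connector's real interface:

```python
from enum import Enum

class Phase(Enum):
    CONFIGURATION = "configuration"
    WORK = "work"
    REPORT = "report"

class ToolSession:
    """Toy model of the configuration -> work -> report lifecycle."""

    ORDER = [Phase.CONFIGURATION, Phase.WORK, Phase.REPORT]

    def __init__(self):
        self.phase = Phase.CONFIGURATION
        self.result_xml = None

    def advance(self):
        """Move to the next sequential phase; phases cannot be skipped."""
        i = self.ORDER.index(self.phase)
        if i == len(self.ORDER) - 1:
            raise RuntimeError("session already in its final phase")
        self.phase = self.ORDER[i + 1]

    def publish_result(self, xml):
        # In iDS the tool pushes its XML results back asynchronously
        # through the server's web service; here we only record them.
        if self.phase is not Phase.WORK:
            raise RuntimeError("results are produced in the work phase")
        self.result_xml = xml
```

The guard in `publish_result` mirrors the requirement that each phase exposes distinct behavior: results appear only during the work phase, and the report phase then consumes them.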

5.3.3 The Evolution

As presented in (Keen 1980; Filip 1995, 2011, 2012; Power and Phillips-Wren 2011; Borne et al. 2013, p. 295), the systems meant to support decisions evolve under the influence of various factors, such as users' requirements and skills, technology development, usage practice and results, and so on (see also Sects. 2.3.3 and 2.4.2). iDS can illustrate such an evolution too. Since 1999, when Ropardo Research and Innovation started investigating how to create a practical decision support system, up to the present “cloud time”, iDS has been implemented in different organizations and has also served as a research instrument.

In 2001, an integrated agent-based model for GDSS was presented, supporting explicit representation of the decision-makers' roles and of the procedural and contextual settings, along with the group commitment to share a plan of actions as a way to achieve a common goal, resulting in “improved capabilities, range and flexibility of GDSS” (Zamfirescu et al. 2002).

During the period 2006–2007, the system was adapted to the academic environment by using “several intelligent software tools” that assist the process of quality assurance and management, such as a students' performance indicators extractor, electronic voting for the selection of grant proposals, and a quality evaluation questionnaires manager (questionnaire generator, distributor and analyzer) (Oprean et al. 2009). The GDSS system was integrated with eUNIV (Candea et al. 2008), a project that transferred an e-business solution for knowledge management to the academic environment and to the university information system. Later, in 2007, the iGDSS software framework for decision support systems focused on developing a conceptual tool to which any third party can contribute creative ideas for modelling the decision process (Georgescu et al. 2007).

The iGDSS system was implemented for public administration based on the eCollaborative Decision solution developed for the academic and public administration sectors (2006–2008). A successful integration with the m-Business solution that addresses Small and Medium Enterprises (SMEs) was realized, in order to adapt the solution for a new market (Oprean et al. 2002; Radu et al. 2014).

Adapting the iGDSS for deployment in the manufacturing industry was a task that started in 2006 with the Digital Factory (DiFac) project (Cândea and Cândea 2011; Cândea et al. 2012). DiFac aimed at the development of an innovative Collaborative Manufacturing Environment (CME) for next-generation digital manufacturing (Sacco et al. 2007). Later, in 2012, the iDS platform was successfully integrated into a complex Virtual Factory Framework (Jain 1995) environment and was implemented in different manufacturing factories in Europe (Sacco et al. 2009).

In 2014, iDS was presented in an article published in Neurocomputing (Zamfirescu et al. 2014), exposing the latest developments in the iDS platform related to Group Decision Process Design (GDPD) as “a human-computer interaction engineering approach to design a software prototype that provides personalized, contextual and actionable recommendations for the GDPD”.

Over time, the software solution has evolved from a simple client–server implementation, through the first web version, to a complex software platform that takes advantage of cloud computing and has become as complex as an ERP. The name of the software has also evolved: from GDSS (Group Decision Support System) to iGDSS (intelligent Group Decision Support System), then to iDSS (intelligent Decision Support System), and finally to the iDS platform (intelligent Decision Support). Future research will include cloud computing for GDSS as a research domain and, in particular, how the iDS platform can benefit from technologies that are already on the market.
Rapid scaling of computing capabilities and resource pooling will allow more sophisticated decision support tools to be deployed over the Internet on different platforms, including mobile devices (such as iPhones and tablets), desktop and laptop computers (see Sect. 4.4), as well as industrial devices. Incorporating a library of optimization modules based on fuzzy sets theory (Herrera and Herrera-Viedma 1996; Precup and Hellendoorn 2011) is also envisaged.
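To give a flavor of the envisaged fuzzy-set modules, the sketch below aggregates linguistic group assessments represented as triangular fuzzy numbers. The label vocabulary and its numeric supports are illustrative choices for this example, not taken from the cited works:

```python
# Linguistic labels mapped to triangular fuzzy numbers (a, b, c),
# where b is the peak and [a, c] the support (illustrative values).
LABELS = {
    "low":    (0.0, 0.0, 0.5),
    "medium": (0.25, 0.5, 0.75),
    "high":   (0.5, 1.0, 1.0),
}

def aggregate(assessments):
    """Average the participants' triangular fuzzy numbers component-wise."""
    triples = [LABELS[a] for a in assessments]
    n = len(triples)
    return tuple(sum(t[i] for t in triples) / n for i in range(3))

def defuzzify(tri):
    """Centroid of a triangular fuzzy number, as a crisp group score."""
    return sum(tri) / 3.0
```

With three participants rating "low", "medium" and "high", the component-wise average lands on the "medium" triple, and its centroid gives a crisp score of 0.5; richer consensus operators (e.g. in the spirit of Herrera and Herrera-Viedma 1996) would refine this simple averaging step.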

References

Bărbat B E, Negulescu S C, Zamfirescu C B (2005) Human-driven stigmergic control. Moving the threshold. In: Proceedings of the 17th IMACS World Congress – Scientific Computation, Applied Mathematics and Simulation, Paris: p. 86–92.
Berkhin P (2006) A survey of clustering data mining techniques. In: Grouping Multidimensional Data. Recent Advances in Clustering, Kogan J, Nicholas C, Teboulle M eds. Springer Berlin Heidelberg, Berlin: p. 25–71.
Bollier D (2010) The Promise and Peril of Big Data. Aspen Institute, Washington DC.
Borne P, Popescu D, Filip F G, Stefanoiu D (2013) Optimization in Engineering Sciences. Exact Methods. J. Wiley & Sons, London.

Brandas C, Panzaru C (2015) Analysing of the labour demand and supply using web mining and data mining in Romania. In: Big Data and the Complexity of Labour Market Policies: New Approaches in Regional and Local Labour Market Monitoring for Reducing Skills Mismatches, Larsen C, Rand S, Schmid A, Mezzanzanica M, Dusi S eds. Rainer Hampp Verlag, Munchen: p. 71–86.
Brandas C, Panzaru C, Filip F G (2016) Data driven decision support systems: an application case in labour market analysis. ROMJIST, 19(1-2): 65–77.
Buzan T (2005) The Ultimate Book of Mind Maps: Unlock Your Creativity, Boost Your Memory, Change Your Life. HarperCollins.
Cândea C, Cândea G (2011) i-Portal and Factory Hub. In: Digital Factory for Human Centred Production Systems, Caneta L, Flores M, Redaelli C eds. Springer, London: p. 271–282.
Cândea C, Cândea G, Filip F G (2012) iDecisionSupport – web-based framework for decision support systems. In: Proceedings, 14th IFAC Symposium on Information Control Problems in Manufacturing, INCOM 2012: p. 1117–1122.
Candea C, Georgescu A, Candea G (2008) iPortal – management framework for mobile business. In: Proceedings of the International Conference on Manufacturing Science and Education MSE 2009, Sibiu, Romania, 4–6 June.
Candea G, Candea C, Radu C, Terkaj W, Sacco M, Suciu O (2012) A practical use of the Virtual Factory Framework. In: 14th International Conference on Modern Information Technology in the Innovation Process of the Industrial Enterprises, Budapest, Hungary (available at https://igor.xen.emi.sztaki.hu/mitip/media/MITIP2012_proceedings.pdf#page=256, accessed on 26.01.2016).
Candea C, Filip F G (2016) Towards intelligent collaborative decision support platforms. Studies in Informatics and Control, 25(2): 143–152.
Carley K M (1999) On generating hypotheses using computer simulations. In: Proceedings of the 1999 International Symposium on Command and Control Research and Technology. Evidence Based Research, Vienna.
Clark J (1999) XSL Transformations (XSLT). World Wide Web Consortium (W3C) (available at http://www.w3.org/TR/xslt, accessed on 25.01.2015).
Conklin J, Begeman M L (1988) gIBIS: a hypertext tool for exploratory policy discussion. ACM Transactions on Information Systems (TOIS), 6(4): 303–331.
Conklin J (2003) The IBIS Manual: A Short Course in IBIS Methodology. Touchstone.
Dorrer J (2015) Analyzing labor markets in real time. In: The University Next Door: What Is a Comprehensive University, Who Does It Educate, and Can It Survive? Schneider M, Deane K C eds. Teachers College Press, Amsterdam: p. 149–172.
Dusi S, Mercorio F, Mezzanzanica M (2015) Big data meets web job vacancies: trends, challenges and development directions. In: Big Data and the Complexity of Labour Market Policies: New Approaches in Regional and Local Labour Market Monitoring for Reducing Skills Mismatches, Larsen C, Rand S, Schmid A, Mezzanzanica M, Dusi S eds. Rainer Hampp Verlag, Munchen: p. 31–44.
Elliott M A (2007) Stigmergic Collaboration: A Theoretical Framework for Mass Collaboration. PhD Thesis, Centre for Ideas, Victorian College of the Arts, The University of Melbourne.
Fayyad U, Piatetsky-Shapiro G, Smyth P (1996) From data mining to knowledge discovery in databases. AI Magazine, 17(3): 37–54.
Filip F G (1995) Towards more humanized real-time decision support systems. In: Balanced Automation Systems: Architectures and Design Methods, Camarinha-Matos L, Afsarmanesh H eds. Chapman & Hall, London: p. 230–240.
Filip F G (2011) Designing and building modern information systems: a series of decisions to be made. Computer Science Journal of Moldova: 119–129.
Filip F G (2012) A decision-making perspective for designing and building information systems. International Journal of Computers Communications & Control, 7(2): 264–272.
Finin T, Fritzson R, McKay D, McEntire R (1994) KQML as an agent communication language. In: Proceedings of the Third International Conference on Information and Knowledge Management, ACM: p. 456–463.

Georgescu V, Candea C, Zamfirescu C B (2007) iGDSS – software framework for group decision support systems. In: Proceedings, The Good, The Bad and The Unexpected Conference, Sapio B, Fortunati L, Haddon L, Kommonen K-H, Mante-Meijer E, Turk T eds. COST Action 298 Participation in the Broadband Society – European Science Foundation, Moscow (available at: http://www.iis.ru/en/content/view/328/91/, accessed on 25.01.2016).
Grassé P P (1953) La théorie de la stigmergie: essai d'interprétation du comportement des termites constructeurs. Insectes Sociaux, 6: 41–81.
Grosz B, Kraus S (1996) Collaborative plans for complex group action. Artificial Intelligence, 86: 269–357.
Guerin S, Kunkle D (2004) Emergence of constraint in self-organizing systems. Nonlinear Dynamics, Psychology, and Life Sciences, 8(2): 131–146.
Hartigan J A, Wong M A (1979) Algorithm AS 136: a k-means clustering algorithm. Applied Statistics, 28(1): 100–108.
Herrera F, Herrera-Viedma E (1996) A model of consensus in group decision making under linguistic assessments. Fuzzy Sets and Systems, 78(1): 73–87.
Heylighen F (1999) Collective intelligence and its implementation on the Web. Computational and Mathematical Organization Theory, 5: 253–280.
Heylighen F (2016) Stigmergy as a universal coordination mechanism I: definition and components. Cognitive Systems Research, Elsevier, 38: 4–13.
Hilbert M (2013) Big Data for Development: From Information- to Knowledge Societies. SSRN. doi:10.2139/ssrn.2205145.
Hutchins E (1995) Cognition in the Wild. MIT Press, Cambridge.
Jain S (1995) Virtual factory framework: a key enabler for agile manufacturing. In: Emerging Technologies and Factory Automation, ETFA'95, Proceedings, 1995 INRIA/IEEE Symposium on, Vol. 1, IEEE: p. 247–258.
Javed F, McNair M, Jacob F, Zhao M (2014) Towards a job title classification system. In: Proceedings of the 1st Workshop on Web-Scale Classification: Classifying Big Data from the Web, New York City, February 2014.
Keen P (1980) Adaptive design for decision support systems. Data Base, 12(1): 15–25.
Kreibich R (2015) Von big zu smart – zu sustainable? (available at: http://www.bpb.de/apuz/202244/von-big-zu-smart-zu-sustainable?p=all, accessed 10 Dec 2015).
Kureková L M, Beblavy M, Thum A E (2014) Using internet data to analyse the labour market: a methodological enquiry. IZA Discussion Papers 8555: 3–25.
Lantra (2005) Northern Ireland Labor Market Intelligence. Lantra House, Warwickshire.
Larsen C, Rand S (2015) Introduction. In: Big Data and the Complexity of Labour Market Policies: New Approaches in Regional and Local Labour Market Monitoring for Reducing Skills Mismatches, Larsen C, Rand S, Schmid A, Mezzanzanica M, Dusi S eds. Rainer Hampp Verlag, Munchen: p. 11–30.
Luce D (1959) Individual Choice Behaviour. Wesley, New York.
Mackenzie A, Pidd M, Rooksby J, Sommerville I, Warren I, Westcombe M (2006) Wisdom, decision support and paradigms of decision making. European Journal of Operational Research, 170(1): 56–171.
Nardi B A (1996) Studying context: a comparison of activity theory, situated action models, and distributed cognition. In: Context and Consciousness: Activity Theory and Human-Computer Interaction, Nardi B ed. MIT Press.
Oprean C, Kifor C V, Negulescu S C, Candea C, Oprean C (2009) e-Collaborative Decisions – a DSS for academic environment. In: Proceedings of the World Congress on Science, Engineering and Technology & International Conference on Computer, Electrical, and Systems Science, and Engineering: p. 173–179.
Oprean C, Moisil I, Candea C (2002) eUniv: an e-business solution for a university academic environment. In: Proceedings of 3rd Global Congress on Engineering Education, Glasgow, UK: p. 363–366.
Parunak H V D (2006) A survey of environments and mechanisms for human-human stigmergy. Lecture Notes on Artificial Intelligence, Springer, 3830: 163–186.

Parunak H V D, Brueckner S A (2009) The cognitive aptitude of swarming agents (available at: http://www.soartech.com/images/uploads/file/The_Cognitive_Aptitude_of_Swarming_Agents.pdf, accessed on 30.07.2016).
Parunak H V D, Brueckner S A, Sauter J A, Matthews R (2005) Global convergence of local agent behaviors. In: Proceedings of the 4th International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), Utrecht, Netherlands.
Power D J, Phillips-Wren G (2011) Impact of social media and Web 2.0 on decision-making. Journal of Decision Systems, 20(3): 249–261.
Precup R E, Hellendoorn H (2011) A survey on industrial applications of fuzzy control. Computers in Industry, 62(3): 213–226.
Radu C, Cândea C, Cândea G, Zamfirescu C B (2014) Towards a cloud-based group decision support system. In: Recent Developments in Computational Collective Intelligence, Bădică A et al. eds. Studies in Computational Intelligence 513, Springer: p. 187–196.
Rosen D, Suthers D D (2011) Stigmergy and collaboration: tracing the contingencies of mediated interaction. In: Proceedings of the 44th Hawaiian Int. Conf. on System Sciences, IEEE Press: p. 1–10.
Rourea D D, Goble C, Stevens R (2009) The design and realisation of the myExperiment virtual research environment for social sharing of workflows. Future Generation Computer Systems, 25: 561–567.
Sacco M, Radaelli C, Cândea C, Georgescu V (2009) In: 2009 IEEE International Technology Management Conference (ICE), IEEE: p. 1–8.
Sacco M, Redaelli C, Constantinescu C, Lawson G, D'Cruz M, Pappas M (2007) DiFac: digital factory for human oriented production system. In: Human-Computer Interaction. HCI Applications and Services, Part IV, HCII 2007, LNCS 4552, Springer, Berlin: p. 1140–1149.
Schneider M, Deane K C eds (2014) The University Next Door: What Is a Comprehensive University, Who Does It Educate, and Can It Survive? Teachers College Press, Amsterdam.
Sebastiani F (2002) Machine learning in automated text categorization. ACM Computing Surveys (CSUR), 34(1): 1–47.
Skitas N, Zimmermann K F (2015) The Internet as a data source for advancement in social sciences. IZA Discussion Papers 8899 (available at: ftp://ftp.iza.org/dp8899.pdf, accessed 10 Dec 2015).
Susi T, Ziemke T (2001) Social cognition, artefacts, and stigmergy: a comparative analysis of theoretical frameworks for the understanding of artefact-mediated collaborative activity. Cognitive Systems Research, 2(4): 273–290.
Teknomo K (2007) K-Means Clustering Tutorials (available at: http://people.revoledu.com/kardi/tutorial/kMean, accessed 17.12.2015).
Thuy P, Hansen E, Price D (2001) The Public Employment Service in a Changing Labour Market. International Labour Office, Geneva.
Wilensky U (1999) NetLogo. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston (available at: http://ccl.northwestern.edu/netlogo/, accessed on 30.07.2016).
Woods J F, O'Leary C J (2006) Conceptual Framework for an Optimal Labour Market Information System: Final Report. Upjohn Institute Technical Report 07–022. doi:10.17848/tr07-022.
Zamfirescu B C, Candea C, Radu C (2014) A stigmergic approach for social interaction design in collaboration engineering. Neurocomputing, 146: 151–163.
Zamfirescu C B, Candea C (2011) Planning in collaborative stigmergic workspaces. In: Computational Collective Intelligence. Technologies and Applications, Part II, Jedrzejowicz P, Thanh Nguyen N, Hoang K eds. Lecture Notes in Computer Science 6923, Springer: p. 160–169.
Zamfirescu C B, Candea C (2012) A stigmergic guiding system to facilitate the group decision process. In: Proceedings of 28th IEEE International Conference on Data Engineering, IEEE Press, Washington: p. 98–102.

Zamfirescu C B, Candea C, Luca M (2002) On integrating agents into GDSS. In: Large Scale Systems: Theory and Applications. Proceedings of the 9th IFAC/IFORS/IMACS/IFIP Symposium, Filip F G, Dumitrache I, Iliescu S S eds. Pergamon Press, Oxford: p. 433–438.
Zamfirescu C B, Candea C (2013) On interacting with collective knowledge of group facilitation. In: 5th International Conference on Computational Collective Intelligence Technologies and Applications – ICCCI 2013, LNAI 808: p. 255–265.
Zamfirescu C B, Filip F G (2010) Swarming models for facilitating collaborative decisions. Int. J. of Computers, Communications & Control, 5(1): 125–137.
