
FACULDADE DE ENGENHARIA DA UNIVERSIDADE DO PORTO

Redesign and Gamification of a Social-Intensive Software Learning Environment

Pedro Manuel Monteiro Albano

Mestrado Integrado em Engenharia Informática e Computação

Supervisor: Nuno Honório Rodrigues Flores

February 1, 2020


Abstract

The use of frameworks has become an important factor in improving the large-scale reuse of software and in ensuring that quality standards are maintained. As the world demands more and more computer-based solutions and applications, software developers face an increasing number of available frameworks and the need to learn how to use them effectively in their professional lives. Frameworks are often complex, and good documentation helps reduce the learning curve for professionals applying them, but it is not always available or adequate to one's learning needs.

Previous work resulted in the creation of DRIVER, a social-intensive learning platform modeled around the concept of a Collective Knowledge System. There, learners contribute to the community's global knowledge of a framework by sharing how to reach specific goals through the publication of learning paths: ordered sets of sections and pages visited in order to reach a milestone, described by the use of tags.

Despite providing developers with a set of tools to enhance the learning aspect of working with a framework, DRIVER sees low adherence, due to general problems with its usability and the low incentive for users to take advantage of its tools, which effectively create an additional workload in the task of learning.

In order to make the platform more appealing, the use of Gamification was considered. Gamification is briefly described as adding game-based mechanics, such as characters, points or rewards, to activities or tools in order to foster users' motivation to engage with them, and subsequently their productivity. The objective was to create a new version of DRIVER integrating these concepts and to observe the impact they had on the developers' learning process, while also attempting to solve the tool's inherent user experience problems.

This resulted in the creation of a whole new piece of software that redesigned the original features of DRIVER into perceivably simpler and more intuitive ones, while implementing a customizable avatar and an experience point system built around unlocking rewards for making use of the platform's features. Following a short study evaluating two deployments of the updated learning environment, one with Gamification-related features enabled and one without them, no significant difference was observed in the performance of the participating subjects; however, small traces indicate that the Gamification features employed appealed to the interest of some of the users and might produce the intended benefits in the long term.

Keywords: gamification, collective knowledge systems, documentation, frameworks, learning, software, DRIVER

Resumo

A utilização de frameworks tornou-se um fator importante na melhoria da reutilização de software em larga escala e na garantia de que os padrões de qualidade são mantidos. Ao mesmo tempo que a procura no mercado por soluções informáticas aumenta, aumenta igualmente a necessidade que os desenvolvedores de software têm de aprender a utilizar o número em constante crescimento de frameworks disponíveis na sua vida profissional. Estas frameworks são muitas vezes complexas, e uma boa documentação ajuda a reduzir a curva de aprendizagem dos profissionais que as têm de aprender a utilizar; contudo, nem sempre está disponível ou não é adequada às necessidades de todos.

Trabalho anterior resultou na criação do DRIVER, uma plataforma de aprendizagem socialmente intensiva, modelada em volta do conceito de um "Sistema de Conhecimento Coletivo". Nesta ferramenta, os seus aprendizes contribuem para o conhecimento global da comunidade sobre uma framework através da partilha de como chegar a objetivos específicos, pela publicação de "caminhos de aprendizagem", que se definem por um conjunto de secções e páginas visitadas com o objetivo de chegar a uma determinada meta, descrevendo-se pelo uso de etiquetas.

Apesar de fornecer aos desenvolvedores de software um conjunto de ferramentas focado em melhorar o aspeto da aprendizagem relacionado com o trabalho com uma framework, o DRIVER é pouco utilizado devido a problemas gerais na sua usabilidade e ao facto de não conferir grande incentivo aos utilizadores para fazerem uso vantajoso das suas funcionalidades, que efetivamente provocam uma carga de trabalho adicional na tarefa de aprender.

De forma a tornar a plataforma mais apelativa, foi considerado o uso de Gamificação. Descrevendo de forma resumida, Gamificação é o ato de adicionar mecânicas de jogo a atividades ou ferramentas, tais como personagens, pontos ou recompensas, de forma a criar motivação para que os utilizadores interajam com elas e, consequentemente, aumentar a sua produtividade. O objetivo era criar uma nova versão do DRIVER que integrasse estes conceitos e observar o impacto que eles tinham no processo de aprendizagem dos programadores, mas não sem tentar resolver os problemas implícitos na experiência de utilizador que a ferramenta tinha. Isto resultou na criação de uma nova peça de software que redesenhou as funcionalidades originais do DRIVER para serem mais simples e intuitivas, ao mesmo tempo que se implementou um avatar personalizável com um sistema de pontos de experiência construído em volta do desbloqueio de recompensas pelo uso das funcionalidades da plataforma.

No seguimento de um pequeno estudo que avaliou duas implementações da versão atualizada da plataforma de aprendizagem, uma com as funcionalidades relacionadas com a gamificação ativadas e outra sem elas, não se constataram diferenças significativas no desempenho dos participantes; contudo, há pequenas indicações de que as funcionalidades de gamificação aplicadas apelaram ao interesse de alguns dos utilizadores e podem vir a produzir os benefícios desejados a longo prazo.

Palavras-chave: gamificação, sistemas de conhecimento coletivo, documentação, frameworks, aprendizagem, software, DRIVER

Acknowledgements

I'd like to begin by expressing a much deserved gratitude to Cecília Monteiro and Francisco Albano, my mother and father. Their outstanding support throughout my entire life is the reason why I have the privilege of being one of the many who partake in works like this, among many, many other things. A special thanks goes to my supervisor and this dissertation's proponent, Nuno Flores, who has always helped me with everything necessary to develop this project smoothly and gave me the freedom to add a little bit of "me" into it. To my friends, I will be ever so thankful for their support and programming tips, with special attention to the constant reminders that there's no such thing as "too much time" to do a dissertation work like this. Last but not least, I would like to thank all of the developers and contributors of the free software libraries and frameworks they make available all around the web, which the products of this dissertation were built on. Works like this wouldn't be possible without them.

Pedro Manuel Monteiro Albano

Contents

1 Introduction
  1.1 Context
  1.2 Motivation and Goals
  1.3 This document's structure

2 Framework Learning
  2.1 Introduction
  2.2 Documentation Pages
    2.2.1 Written in Wiki Software
    2.2.2 Written in Custom Web Pages
    2.2.3 Generated from Source Code and Annotations
  2.3 Documentation Design Practices
    2.3.1 Design Patterns and Pattern Languages
    2.3.2 Demonstrations of Usage and Functionality
  2.4 Summary

3 DRIVER
  3.1 Introduction
  3.2 Learning Paths
    3.2.1 How are Learning Paths produced?
    3.2.2 Why are Learning Paths important?
  3.3 Usability Problems
  3.4 Summary

4 Gamification
  4.1 Introduction
  4.2 Examples of Gamification
  4.3 Gamification for Learning
  4.4 Summary

5 Planning, Methodology and Goals
  5.1 Introduction
  5.2 Development Methodology
  5.3 Redesign Plans and Goals
  5.4 Gamification Plans and Goals
  5.5 Expected Results: Gamification vs Redesign only
  5.6 Summary


6 DRIVER 2.0: The New Version
  6.1 Introduction
    6.1.1 About the figures in this chapter
  6.2 Architecture and Technologies
    6.2.1 Server Architecture and Technologies
    6.2.2 Client Architecture and Technologies
  6.3 General Features
    6.3.1 Documentation Pages
    6.3.2 Recommendation, Searching and Tag Filters
    6.3.3 Path Menu and Editor
    6.3.4 Path Wallet and Sharing
    6.3.5 Tutorial
  6.4 Gamification Features
    6.4.1 User Avatar and Experience Points
    6.4.2 Daily Activities and Weekly Challenge
    6.4.3 Leaderboard
  6.5 DRIVER 2.0 with Gamification disabled
  6.6 A short note about Cows
  6.7 Summary

7 Validation Experiment
  7.1 Introduction
  7.2 This Experiment vs The Ideal One
  7.3 Experiment Details
    7.3.1 Test Procedure
    7.3.2 Test Subjects and Grouping
    7.3.3 Chosen Framework
    7.3.4 Test Environment
    7.3.5 Pre-experiment Questionnaire
    7.3.6 Tasks to be solved using the framework
    7.3.7 Post-experiment Questionnaire
  7.4 Results Analysis
    7.4.1 Statistical Relevance
    7.4.2 Background
    7.4.3 External Factors
    7.4.4 Overall Satisfaction
    7.4.5 Development Process
    7.4.6 About the Framework
    7.4.7 About the Gamification Features
  7.5 Threats to Validation
  7.6 Summary

8 Conclusions and Future Work
  8.1 Conclusions
  8.2 Future Work

References

A Pre-experiment Questionnaire


B Tasks to be solved using the framework

C Post-experiment Questionnaire

D Post-experiment Questionnaire (w/ Gamification)

E Table of Questionnaire acquired results


List of Figures

2.1 Overview of the framework learning process
2.2 Screenshot of the DokuWiki installation guide running on the software itself
2.3 Screenshot of the React docs "Getting Started" page
2.4 Screenshot of a documentation page produced by Doxygen
2.5 An example of code usage as seen in the React framework's tutorial
2.6 An Interactive Snippet on Mozilla's MDN web docs

3.1 Screenshot of a page written in the DRIVER platform
3.2 Overview of a Q&A type Collective Knowledge System
3.3 Overview of the Learning Path creation process in the original DRIVER
3.4 Widget for searching Learning Paths in the DRIVER tool

4.1 Screenshot of the Foldit game
4.2 Screenshot of one of the Dragonbox Math Apps

5.1 Gantt chart of the dissertation's development schedule by weeks

6.1 Overview screenshot of the DRIVER 2.0 platform
6.2 General system architecture of the DRIVER 2.0 platform
6.3 Core interface elements of the DRIVER 2.0 platform
6.4 A documentation page within DRIVER 2.0
6.5 A page section within DRIVER 2.0
6.6 Comparison between a normal and a highlighted section
6.7 DRIVER 2.0's page editor
6.8 DRIVER 2.0's recommendation widget
6.9 DRIVER 2.0's contextual recommendation feature
6.10 Recommendation widget being influenced by the user's selected tags
6.11 Matching vs non-matching section while context tags are active
6.12 DRIVER 2.0's search tool
6.13 DRIVER 2.0's My Path Menu
6.14 Manually adding a step to a Learning Path from a documentation section
6.15 Visual anatomy of a Learning Path step
6.16 DRIVER 2.0's Path Editor
6.17 DRIVER 2.0's Path Wallet showing a list of the user's creations
6.18 Description of a list item from the user's Path Wallet
6.19 A Learning Path's sharing page within DRIVER 2.0
6.20 DRIVER 2.0's Tutorial Page
6.21 DRIVER 2.0 sidebar's user profile widget
6.22 Examples of possible user avatar customizations


6.23 User's profile page showcasing the customization options for the avatar
6.24 User's daily objectives in DRIVER 2.0
6.25 DRIVER 2.0's Weekly Challenge page
6.26 The leaderboard as seen in DRIVER 2.0
6.27 DRIVER 2.0 with Gamification features disabled
6.28 Images part of an introductory video to the original DRIVER tool

7.1 Overview of the validation experiment protocol

List of Tables

7.1 Statistics of the "Background" part of the pre-experiment questionnaire
7.2 Statistics of the "External Factors" part of the post-experiment questionnaire
7.3 Statistics of the "Overall Satisfaction" part of the post-experiment questionnaire
7.4 Statistics of the "Development Process" part of the post-experiment questionnaire
7.5 Progress statistics of the tasks solved in the validation experiment
7.6 Statistics of the "About the Framework" part of the post-experiment questionnaire
7.7 Distances from the correct answers in the "About the Framework" questions
7.8 Statistics of the "Gamification Features" part of the post-experiment questionnaire


Abbreviations

Admin(s)  Short for "Administrator(s)"
API       Application Programming Interface
CKS       Collective Knowledge System
CSS       Cascading Style Sheets
FEUP      Faculdade de Engenharia da Universidade do Porto
HTML      HyperText Markup Language
HTTP      HyperText Transfer Protocol
IDE       Integrated Development Environment
IRC       Internet Relay Chat
JS        JavaScript
JSON      JavaScript Object Notation
MIEIC     Mestrado Integrado em Engenharia Informática e Computação
MMORPG    Massively Multiplayer Online Role Playing Game
MPA       Multi Page Application
Q&A       Question and Answer
SPA       Single Page Application
UI        User Interface
URL       Uniform Resource Locator
WOW       World of Warcraft (Video Game)
WWW       World Wide Web


Chapter 1

Introduction

This chapter introduces the dissertation's context and motivation, briefly discussing its importance and the problems it looks to address, and finishes by describing the overall document structure and the topics discussed in the following chapters.

1.1 Context

DRIVER[FA16] is a platform created following research into how to improve a developer's task of learning how to use a framework. It's a tool for housing documentation that supports sets of known design elements and techniques needed to produce framework learning artifacts, while adding on top its own features to make the task of learning frameworks easier and more productive.

This dissertation project comes from the University of Porto's Faculty of Engineering (in Portuguese: "Faculdade de Engenharia da Universidade do Porto") as part of the Integrated Master's in Informatics Engineering and Computing (in Portuguese: "Mestrado Integrado em Engenharia Informática e Computação") and continues the platform's development, evolving it into a tool that can better help learning professionals with their task of mastering frameworks.

1.2 Motivation and Goals

Frameworks have become an important asset in producing software, fostering its scalability, reusability and high quality standards. The use of these software libraries has become a substantial necessity in the task of producing applications, due to the overall benefits they provide to the development process. As such, as more and more frameworks appear and their level of sophistication increases, it has become part of a developer's work to learn and master their use as effectively and quickly as possible. Good documentation helps a lot with the task of learning how to use a framework, but

is not always available, may be hard to navigate for less experienced users, and may not be adequate to the needs of different learning individuals.

DRIVER plays an important role in enabling learners to acquire the implicit knowledge within documentation through its social component, improving both the ability to learn from a set of documentation pages and the accessibility of its knowledge. However, DRIVER sees low adherence and usage, due to problems in its usability and an overall lack of incentive to make use of its tools. As it is, generating benefit from the platform requires an additional workload, which is unappealing to those tasked with the effort of learning how to use a framework.

Gamification is a concept that revolves around adding game mechanics to activities in order to improve interest in them, aiming to increase the overall productivity of the activity and the satisfaction of the people performing it. Gamification has a successful record in achieving these objectives and has sometimes produced results above expectations.

Given DRIVER's issues, the goal of this project was to progress DRIVER's development into a new iteration of the platform. This new iteration focuses on the use of gamification elements to improve the appeal of using it, so that users can better take advantage of its unique features, with the expectation of an increase in their overall productivity and satisfaction. In parallel, due to the previous iteration's known usability issues and the need to create a structure on which to implement the new gamification features, a ground-up redesign of the platform was performed, aiming to rework the user interface and its features into a simpler and more intuitive experience.

1.3 This document’s structure

In addition to this introduction, this document contains 7 additional chapters:

• Chapter 2 discusses the state of the art in terms of framework learning and existing solutions.

• Chapter 3 provides a more in-depth look at the original DRIVER platform and its functionality, as well as discussing its issues.

• Chapter 4 introduces the concept of gamification and discusses the state of the art in terms of gamification projects for learning.

• Chapter 5 takes a look at the planning and methodology followed in this dissertation work, the goals and plans behind the upgrade process of DRIVER, and its expected results.

• Chapter 6 approaches the redesign and gamification efforts of the new DRIVER iteration, taking an in-depth look at how the new version of the platform works, its capabilities, and the motivation behind the new design choices.

• Chapter 7 describes the validation experiment performed in order to determine the impact of adding gamification features to DRIVER and evaluates its results.

• Chapter 8 presents the conclusions drawn after carrying out the dissertation project and discusses future work.

Chapter 2

Framework Learning

This chapter describes the state of the art regarding framework learning. It presents the overall process through which developers acquire the knowledge to make use of frameworks, the existing solutions for hosting online documentation pages, and the design practices used to enrich the learning experience. It should be noted that the majority of the topics described in this chapter come from the original DRIVER research work[FA16], which takes a more in-depth look at them.

2.1 Introduction

DRIVER inserts itself in a group of solutions fostering developers' process of learning how to use frameworks. Every developer learns in different ways; however, the methodology usually rests on consulting generic sources in an order based on relevance, reliability and availability. An overview of this process is described in Figure 2.1.

Developers start by examining the documentation pages (or wiki) of the framework. These are written by the authors, who have the most in-depth understanding of the software's capabilities. As such, this is usually the desirable place to look when learning how to use the software, since, in general, the pages are accompanied by guidance on how beginners can make use of these resources, while a more experienced audience can locate details regarding more advanced functionality. However, this often does not work as planned. In reality, documentation is often structured in confusing ways that may not allow learners to easily access the knowledge contained in it, or it may contain insufficient detail on how the software works or on how to use it for specific goals. An example would be the "Getting Started" section of the documentation having all the detail needed to set up the framework, while the rest is as bare bones as only listing method headers and describing their parameters.

When the documentation fails to satisfy the learners' needs, they turn their attention to the next source of knowledge. This can be as straightforward as querying web search engines like Google[Goob] or looking in Q&A forums dedicated to programming, such as Stack Overflow[Sta].


Figure 2.1: Overview of the framework learning process

Nowadays, internet access is highly widespread and there is a very large number of individuals contributing answers in these places, making a developer's task an easier one. Naturally, these answers might not end up being adequate or correct, so it is up to the developer's judgment to review their suitability, for example, in matters like respecting security standards.

Afterwards, developers end up calling for the aid of specialists, individuals more fluent in the usage of the framework of interest, who might not be available or might not have the time to share their knowledge on the matter. This last crowd has the particularity of having already gone through the process of learning how to use the software; before earning their standing as "specialists" in the area, the only reliable source of information available to them would have been the documentation, so that is what they learned from.

This cycle points to the high importance of a framework's documentation being a reliable source of knowledge about its usage, so this chapter takes a look at the existing ways to host documentation online and the design practices to apply in them to increase their overall quality.

2.2 Documentation Pages

Documentation pages come in a few different flavors but ultimately share the same goal of explaining how to use a framework (or anything else, when looking outside of this project's scope). They can be classified into two groups. The first group consists of documentation pages entirely written by the authors, either in supporting software (like DRIVER) or in made-from-scratch solutions. The other group comprises documentation pages generated from the software's source code.

2.2.1 Written in Wiki Software

The structures of a Wiki and of a documentation artifact are rather similar; in truth, you could say they end up being equivalent to a certain degree. Wiki software gets along well with the needs of


Figure 2.2: Screenshot of the DokuWiki installation guide running on the software itself

writing documentation. It enables the creation of a well-organized structure that can be browsed and searched conveniently by its users when looking for information. Software like DokuWiki[GD], as seen in Figure 2.2, supports standard rich text formatting, embedding media such as images and videos, and includes the ability to write snippets of code, potentially with language-appropriate syntax highlighting, which is ideal for displaying contents targeting the learning of a framework. Writing documentation in this sort of solution is advantageous to learners because, when several documentations use the same software, the learner can skip the process of getting used to the environment surrounding it.

2.2.2 Written in Custom Web Pages

It's often the case that documentation pages are written in their own custom solutions. The reasons for this can be diverse: aesthetics, the inclusion of features incompatible with existing Wiki software, or even just preference. Sometimes this is done to demonstrate frameworks in action, as is the case with many web-oriented ones, such as the React[Facb] documentation seen in Figure 2.3. The final product in this scenario doesn't usually diverge much from the Wiki structure of the previous section, but it is obviously more prone to lacking features that foster the learner's tasks, and it loses the advantage of following a sort of standard learning environment.


Figure 2.3: Screenshot of the React docs “Getting Started” page

2.2.3 Generated from Source Code and Annotations

Another sort of documentation is the kind generated from source code and annotations. These are usually produced by attaching comments, following a given format, to the source components of the software, which are later transformed into sets of pages by a tool analyzing the code. An example of this kind of documentation is the kind generated using Doxygen[vH]. Figure 2.4 showcases an API documentation page produced by Doxygen for an implementation of a piece of software called D-Bus[Thea].

This sort of documentation has the advantage of covering the internal functionality of frameworks in more depth; however, it isn't ideal for learning practices, as it is targeted more towards working as reference material. It should be noted that more experienced users of a framework may favor this sort of documentation, as they are usually already acquainted with the overall requirements of the technology.
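Doxygen itself consumes specially formatted comments attached to declarations in C-family languages; as a stand-in, the sketch below shows the equivalent JSDoc-style annotation in TypeScript, which generators such as TypeDoc consume similarly. All names are hypothetical, used only to illustrate the annotation format such tools turn into reference pages.

```ts
/**
 * Connects to a message bus at the given address.
 * (Hypothetical function, shown only to illustrate the annotation style
 * that a documentation generator extracts into an API reference page.)
 *
 * @param address - The bus address to connect to.
 * @returns True if the connection was established.
 */
function connect(address: string): boolean {
  return address.length > 0; // placeholder logic for illustration
}
```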

2.3 Documentation Design Practices

An important part of writing documentation is the usage of elements that can better transmit the knowledge implicit within it. These design practices aim to make the contents of the documentation clearer and easier for learners to understand, making concepts easier to locate, guiding users into adopting the appropriate means to use a framework, and possibly avoiding streaks of intensive trial and error to figure out how to produce given functionality.


Figure 2.4: Screenshot of a documentation page produced by Doxygen

Next are some of the practices deemed most important in producing documentation. It should be noted that not all of them are covered here; other kinds of practices exist, but overall they all work towards the same goal.

2.3.1 Design Patterns and Pattern Languages

Design patterns and pattern languages are best described as creating a set of guidelines and standards for structuring and naming source code elements; the latter is often recognized as using naming conventions. A short example would be the standard practice of naming constant values using only uppercase characters. The goal here is simple: with the documentation following said patterns and the user possibly adopting them in their own work, it becomes intuitive to identify the types of elements in code by quickly observing their naming and location. A function will have a distinguishing name when compared to a variable, and both will usually be grouped in different locations.

The code block shown in Figure 2.5 follows some of these standards. The first character of the name of the class type element used in that example is written in uppercase, in contrast to other types. This inherently makes it more intuitive for developers to identify the intended functionality across different components sharing the same structure and behavior.

It should be noted that, as in the example shown, code editors and syntax highlighters further improve the user's experience by visually enhancing the code's display (by applying different coloring or other styling to it). This is usually performed by combining the programming language's syntax with the knowledge implicit in these patterns, so adopting them not only standardizes the user's perception within the documentation or their code, but also improves the behavior of the software handling it.
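As a minimal sketch of these conventions in TypeScript (all names here are hypothetical, chosen purely for illustration), note how a reader can infer each element's role from its name alone:

```ts
// Illustrative naming conventions: constants in uppercase, classes in
// PascalCase, functions and variables in camelCase.
const MAX_RETRY_COUNT = 3;          // constant: all uppercase

class ConnectionPool {              // class/type: PascalCase
  private activeConnections = 0;    // variable: camelCase

  acquireConnection(): void {       // function/method: camelCase
    this.activeConnections += 1;
  }
}
```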


Figure 2.5: An example of code usage as seen in the React framework’s tutorial

2.3.2 Demonstrations of Usage and Functionality

Perhaps more important than explaining functionality is the act of demonstrating how to use it and what it produces. This can range from showing how a tiny bit of the framework works to a collection of steps needed to assemble a larger feature. Some of the practices having this as their objective are the following:

Examples

Examples are an effective way to demonstrate how to make use of a framework, providing samples that illustrate core aspects of the framework's usage in a way learners can easily understand. Figure 2.5 holds an example showcasing how to write a simple component using the React framework[Facb], as part of its introductory tutorial. This structure is at the heart of React's functionality, but it could otherwise be troublesome for newcomers to understand how to create it if it were not described with an example.
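In the same spirit as the tutorial code in Figure 2.5, here is a minimal sketch of a React function component (written in TypeScript/TSX; the component and prop names are hypothetical, not taken from the React tutorial itself):

```tsx
import React from "react";

// Illustrative props for a hypothetical component.
interface GreetingProps {
  name: string;
}

// Function components returning JSX are the core structure the React
// tutorial's example demonstrates; note the PascalCase component name.
function Greeting({ name }: GreetingProps) {
  return <h1>Hello, {name}!</h1>;
}

export default Greeting;
```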

Interactive Snippets

Interactive Snippets are a variant of examples. They have the same benefits but also offer learners a sandbox to test functionality on the spot, avoiding the need to leave the documentation pages for the much needed process of trial and error when figuring out how to make use of the framework's features.


Figure 2.6: An Interactive Snippet on Mozilla’s MDN web docs

Figure 2.6 demonstrates a running Interactive Snippet on Mozilla's MDN web docs[MI], displaying a usage scenario of a JavaScript method that the user can manipulate to see the results change on the spot.

The caveat hindering the usage of these snippets in place of static examples is that they can't always be made available, due to the nature of how programming languages compile written code into computer instructions. The implementation of such a feature is most of the time not trivial. It is sometimes available for languages that are interpreted in real time and can run within the documentation software's environment, as is the case of web ones like JavaScript, but it is a very rare occurrence when a dedicated solution is required or when the language requires pre-compilation in order to produce executable code.
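As a rough illustration of the mechanics involved, the sketch below wires an editable text area to a button that evaluates the user's JavaScript on the spot. The element ids are hypothetical, and real implementations such as MDN's run the code in sandboxed iframes rather than evaluating it directly in the page:

```ts
// Minimal sketch of an interactive snippet (browser context assumed).
const editor = document.getElementById("snippet-editor") as HTMLTextAreaElement;
const output = document.getElementById("snippet-output") as HTMLElement;
const runButton = document.getElementById("snippet-run") as HTMLButtonElement;

runButton.addEventListener("click", () => {
  try {
    // Evaluate the edited code and display whatever it returns.
    const result = new Function(editor.value)();
    output.textContent = String(result);
  } catch (err) {
    output.textContent = `Error: ${err}`;
  }
});
```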

Tutorials and Cookbooks

Tutorials and Cookbooks, such as the one previously mentioned in Figure 2.3, play an important part in the process of teaching how to use a framework, as they provide step-by-step guides on how to reach certain goals and can incorporate the other design practices described in this section. They are highly effective in helping newcomers adopt a new framework or in guiding the usage of the software's more complex features. However, this sort of artifact can't be overused in the creation of documentation, or it can slow down the process of locating the smaller bits of functionality covered within. Most notably, this type of content can prove unfavourable

when trying to convey knowledge to more experienced users, as they usually prefer quicker chunks of information.

2.4 Summary

This chapter shows how learners behave when seeking the knowledge needed to use a framework, what exists in terms of resources for writing documentation outside of DRIVER, and how learners can make use of them. It can be concluded that there are several means of guiding the production of good documentation; however, there aren't many that can be used to grow knowledge beyond what is written within it.

Chapter 3

DRIVER

This chapter takes a look into the original DRIVER tool: its foundation, its functionality, and how it tries to complement existing efforts by adding a social-intensive component to the task of learning how to use frameworks.

3.1 Introduction

DRIVER[FA16], shown in Figure 3.1, was developed as a system targeting the improvement of the developer's task of learning how to use frameworks. The platform was built on top of the DokuWiki software[GD] as a plugin extending its functionality. The goal was to create a system providing authors and users with the tools to create documentation pages suitable for framework learning, enabling the design practices referred to in Chapter 2, while adding on top functionality promoting its usage as a Collective Knowledge System.

A Collective Knowledge System, or CKS, as conceptualized by Tom Gruber[Gru08], is a social environment where users store information and make it accessible to others through technological means, enabling the community to search and review it, promoting an increase in its users' overall understanding of the topics within. The system itself should offer the possibility of producing new knowledge through means of inference, like recommendation engines. In short, a CKS is an environment where information can be stored and both the system and its users can manipulate it to generate knowledge.

Figure 3.2 showcases a hypothetical Q&A system Gruber[Gru08] named "The FAQ-o-Sphere" as an example to describe the behavior of Collective Knowledge Systems. A Q&A forum like Stack Overflow[Sta] can be seen as a CKS (a minimal interface sketch follows the list), since:

• It allows users to propose questions (Store knowledge)

• It allows others to answer them or discuss them (Store and review knowledge)

• Answers can be voted correct or incorrect (Review knowledge)


Figure 3.1: Screenshot of a page written in the DRIVER platform

• Anyone has access to the discussion to acquire information from it (Search knowledge)

• The service can even recommend at which discussions to look at next (Generate knowledge)
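As referenced above, the operations this list describes can be summarized in a minimal TypeScript sketch; the interface and method names are illustrative assumptions, not Gruber's terminology or Stack Overflow's actual API:

```ts
// A hypothetical knowledge item, such as a question or an answer.
interface Item {
  id: string;
  body: string;
  votes: number;
}

// The four CKS capabilities from the list above, as an interface.
interface CollectiveKnowledgeSystem {
  store(item: Item): void;                    // propose questions/answers
  review(itemId: string, vote: 1 | -1): void; // vote correct/incorrect
  search(query: string): Item[];              // anyone can search the knowledge
  recommend(itemId: string): Item[];          // system-inferred "what to read next"
}
```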

Looking at Gruber's description of "The FAQ-o-Sphere", you can see that the reality of how knowledge flows within Stack Overflow isn't far from it, and that it follows the concept of a Collective Knowledge System.

The idea for DRIVER, then, was to include a mechanism allowing users to produce knowledge on where to locate information within the documentation and make it available for others to browse, quickening the process of finding the right pages and sections needed to achieve given goals and preserving this knowledge for later usage. This was accomplished by creating the Learning Paths feature.

3.2 Learning Paths

Learning Paths are best described as the ordered set of sections and pages visited by a user in order to acquire the knowledge needed to implement certain functionality. They work analogously to a click stream through a website, or even a browsing history. Learning Paths are labelled with tags to identify their purpose, and users can rate them to evaluate their accuracy and/or reliability.
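As a concrete reading of this description, the sketch below models a Learning Path as a data structure. The field names are assumptions made for illustration; the original DRIVER stores this data inside DokuWiki, and its actual schema is not documented here.

```ts
// Hypothetical shape of a single step in a Learning Path.
interface PathStep {
  pageId: string;      // documentation page visited
  sectionId?: string;  // optional section within that page
  bookmarked: boolean; // marked as important by the author
}

// Hypothetical shape of a full Learning Path.
interface LearningPath {
  id: string;
  author: string;
  tags: string[];      // labels identifying the path's purpose
  steps: PathStep[];   // ordered: the sequence of visits matters
  ratings: number[];   // user ratings of accuracy/reliability
}
```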

3.2.1 How are Learning Paths produced?

In DRIVER, to produce a Learning Path, a user begins by clicking an option telling the system to track their browsing through the documentation. While searching through the pages, users have


Figure 3.2: Overview of a Q&A type Collective Knowledge System

the ability to create bookmarks to highlight pages and sections they consider important for the task they are trying to accomplish. The process ends by triggering an option to stop the tracking, which subsequently provides a set of tools to prune unnecessary steps from the learning path, attribute tags to it, and share/store it in the system. An overview of the process can be seen in Figure 3.3. Following this, the Learning Path becomes available for all users to browse and rate. Figure 3.4 showcases the feature allowing users to search for the paths stored in the platform.
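As a rough illustration of this workflow, the sketch below models the start/record/bookmark/stop cycle; the class and method names are hypothetical, not the actual DRIVER plugin's API, and PathStep matches the Learning Path sketch earlier in this section.

```ts
// Hypothetical step shape, as sketched above.
interface PathStep {
  pageId: string;
  sectionId?: string;
  bookmarked: boolean;
}

class PathTracker {
  private steps: PathStep[] = [];
  private tracking = false;

  start(): void {
    this.tracking = true;
    this.steps = [];
  }

  // Called whenever the user navigates to a page or section.
  recordVisit(pageId: string, sectionId?: string): void {
    if (!this.tracking) return;
    this.steps.push({ pageId, sectionId, bookmarked: false });
  }

  // Called when the user bookmarks what they are currently reading.
  bookmarkLast(): void {
    const last = this.steps[this.steps.length - 1];
    if (last) last.bookmarked = true;
  }

  // Stopping yields the raw steps; pruning and tagging happen afterwards.
  stop(): PathStep[] {
    this.tracking = false;
    return this.steps;
  }
}
```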

3.2.2 Why are Learning Paths important?

Learning Paths create a series of benefits.

Easy way to share a rich set of knowledge with others

Sharing a Learning Path is different from telling someone to look for a certain topic or providing them with a link to a web page. Learning Paths can work as small wrappers for a large amount of information, relieving the learner of the task of having to look for answers across the entirety of the documentation by providing everything in a single package. By using a Learning Path, learners receive a guide to the most relevant sections and the order in which they need to visit them, effectively producing a sort of lightweight tutorial or cookbook.

Preserving one’s knowledge for later

It's important to understand that Learning Paths don't provide benefits just for others. In creating a Learning Path, the user preserves for later their work and knowledge of browsing through the documentation to reach a goal. The job of a developer relies highly on being accompanied by a reference


Figure 3.3: Overview of the Learning Path creation process in the original DRIVER

set. Often there is the situation where one has to redo an implementation task done previously, but no longer recalls the exact process of doing so. Having the Learning Path stored creates a way to quickly recover that knowledge.

Creating a Recommendation Engine

It was noted earlier in this chapter that Learning Paths resemble click streams, as they work as ordered sets of pages/sections. This information naturally translates into the creation of recommendation processes within the system, as is the case with many websites. A good example is an online shop recommending which products to include in your shopping cart. DRIVER includes a recommendation feature telling users which pages to follow next, with foundation on the path data stored within the platform, allowing the system to infer and make


Figure 3.4: Widget for searching Learning Paths in the DRIVER tool

available new knowledge to its users, one of the desirable characteristics of Collective Knowledge Systems.
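As an illustration of how such an inference could work, here is a minimal sketch that recommends the pages most often visited right after the current one across all stored paths. It is a simple frequency count under the Learning Path structure sketched in Section 3.2; the actual engine in DRIVER is not documented here, so this is an assumption.

```ts
// Minimal shapes needed for this sketch (hypothetical, as before).
interface Step { pageId: string }
interface Path { steps: Step[] }

function recommendNext(paths: Path[], currentPageId: string, topN = 3): string[] {
  const counts = new Map<string, number>();
  for (const path of paths) {
    path.steps.forEach((step, i) => {
      const next = path.steps[i + 1];
      // Count every page that immediately follows the current one.
      if (step.pageId === currentPageId && next !== undefined) {
        counts.set(next.pageId, (counts.get(next.pageId) ?? 0) + 1);
      }
    });
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1]) // most frequent successors first
    .slice(0, topN)
    .map(([pageId]) => pageId);
}
```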

3.3 Usability Problems

Despite DRIVER providing an interesting aid in learning how to use frameworks through its Learning Path features, the tool isn't regarded as being as appealing as it could be, due to some issues within its user experience and overall interface.

Creating Learning Paths and sharing them with others is a rather manual process. It involves spending extra time and attention to produce them properly. Adding tags and pruning steps from the path at the end of its creation is an especially lengthy task, and the feature for searching paths within the system is particularly affected by it, since it relies on Learning Paths being appropriately tagged to produce good results. The process can also suffer from some unreliability if the user forgets to prompt the system to start or stop tracking the path at the appropriate times, leading to it recording potentially irrelevant or hard-to-refine Learning Paths, dropping the user's interest in creating them.

The overall interface of the system doesn't put much emphasis on its additional features, so it becomes somewhat easy for a user to miss or disregard them throughout the platform's usage. In fact, even activating these features within the environment is subject to additional navigation that might prove undesirable to some, as they hide under expandable controls. These problems were identified mostly through interacting directly with DRIVER and experimenting with its features.


In Nuno Flores' PhD thesis[Flo12], as part of its validation process, there are statistics comparing overall user satisfaction in an experiment that split subjects into a group that used documentation with the patterns introduced in Chapter 2 and another that used DRIVER. While the difference was not deemed significant, the group using DRIVER displayed satisfaction levels slightly below the other one. It is a possibility that this difference was caused by the previously identified issues.

As years have passed since the creation of the current DRIVER version, so has progressed the development of web technologies and browsers. While DRIVER's interface elements could use an update to match modern web applications' design components, streamlining it with users' expectations, one of the issues at this time is that some of its features no longer comply with current web browsers' standards. Most notable is the previewing feature, as it relies on HTML iframes to display content, whose use has been hindered due to security concerns across the web.

3.4 Summary

DRIVER works as a complement to the technologies and practices discussed in Chapter 2. It attempts to boost the framework learning process by introducing a social element to the environment in order to help users acquire knowledge more efficiently. DRIVER's Learning Paths create the means to store and use the information produced by the work of browsing documentation while seeking to reach a goal, increasing the overall knowledge of a framework's community of users. However, there are usability issues that may be causing users' low adherence to the platform, toning down the potential the tool could have in helping developers master the knowledge within.

Chapter 4

Gamification

This chapter takes a look at what Gamification is and where it comes from, showcases some successful Gamification projects and talks about some of the elements of Gamification targeting learning activities.

4.1 Introduction

Gamification is the act of introducing game mechanics to activities and resources in order to promote their productivity and enjoyment, potentially even transforming something into a game altogether.

It's no news that human beings like games. From classic card games to modern video games, these have always been known to engage their players in activities captivating their attention. It's no surprise that when a person takes part in an activity of their liking, their performance and productivity towards achieving a goal highly increase, and games often stimulate this behavior by requiring their players to do so in order to win or achieve success.

The classic understanding of a game points to an activity for entertainment; however, games have proven they can perform greatly as training environments for real-world skills, reinforcing behavior and motivating users to perform activities they'd otherwise not be inclined to do, as noted by Karl M. Kapp in his book "The Gamification of Learning and Instruction: Game-based Methods and Strategies for Training and Education"[Kap12]. The idea comes from the fact that players are often required to perform tasks in games that can translate into real-world knowledge. They're actually learning something and not just spending time for the sake of entertainment.

Take for instance an MMORPG (Massively Multiplayer Online Role Playing Game) like World of Warcraft[Bli]. One of the core activities of the game is to organize a party of players (numbers can range from 5 to 40) in order to defeat a dungeon filled with dangerous enemies. The party has a leader, who needs to guide the other players to success. What simply looks like a game can in reality be seen as a scenario teaching its users about


Figure 4.1: Screenshot of the Foldit game

leadership skills, which are valuable in the real world; but in the game the players are allowed to go through a trial and error process that happens in as little as a few minutes, while in the real world it could take months to reach the point of finding success or not. It is also more enjoyable to practice such skills in a scenario with no real-world consequences, making a game an efficient solution for learning how to use such abilities.

This comes to show how game mechanics have the potential to enhance serious activities into engaging and enjoyable experiences, increasing the productivity of their performers, making them more appealing, creating an environment that is easier to understand, and promoting the adherence of their users.

4.2 Examples of Gamification

Yu-kai Chou demonstrates in his presentation "Gamification to improve our world"[Cho14] a few examples of successful Gamification projects and the impacts they had. This section picks up two of the projects mentioned, as they tie in with learning activities and are deemed important in explaining the potential this practice has.

Foldit

Displayed in Figure 4.1, Foldit[Han10] is an online game about folding protein structures. Folding a protein can be seen as a sort of puzzle. Players engage with it in a way where the better


Figure 4.2: Screenshot of one of the Dragonbox Math Apps

they fold the protein, the higher the score they acquire. The solutions produced and their scores are tallied on a leaderboard, and the top ones are reviewed by scientists working in the respective field. This project provides an interesting challenge to its players; in return, scientists receive the solutions produced by the crowd, which can potentially be used to solve real-world problems. According to Chou, a Foldit player managed, in 10 days, to create a solution for a protein structure problem in the AIDS virus that top PhDs in the area had been struggling with for over 15 years.

Dragonbox Math Apps

Targeting a younger audience, the Dragonbox Math Apps[WeW] are designed to help children learn mathematics. One of the apps offers a puzzle game where the goal is to help hatch a dragon that's inside a box, by removing all other boxes from the play area. A level of the app can be seen in Figure 4.2. To solve the challenge in the picture, the player can drag or tap the boxes present in the game area. Tapping the green portal box removes it from play; dragging the black two-dot box onto the white two-dot box (or vice-versa) merges them into a green portal box, which can then be removed like the former, leaving only the dragon box in play and effectively completing the level. This objectively works as an analogy to a real algebra problem. If we consider the dragon box one side of an equation and the other boxes the other side, making the goal to have them be equivalent, and considering the dragon box equal to 0, we can say that tapping the green portals means removing a 0, and dragging the two-dot boxes onto each other works as adding -2 to +2, which transforms them into a green portal, producing another 0, finally reaching the intended value for the side of the equation

that isn't the dragon box, thus solving the problem. This analogy can prove more interesting and simpler to understand for a young crowd than dealing with real numbers on paper that achieve no perceivable consequence, whereas in the app it helps the dragon come out.
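Read algebraically, the level just described amounts to the following reduction (a hypothetical notation; the app itself never shows these symbols):

\[
(+2) + (-2) + 0 \longrightarrow 0 + 0 \longrightarrow 0
\]

Merging the two-dot boxes performs the step $(+2) + (-2) = 0$, and tapping a green portal removes a $0$ term, leaving the non-dragon side equal to $0$, the value assigned to the dragon box.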

4.3 Gamification for Learning

Different Gamification elements favor different outcomes; for this dissertation the focus is on learning. This section takes a look at some of the Gamification elements fostering learning activities and the influence they produce, as explained by Karl M. Kapp in his book[Kap12].

Storytelling and Analogies

Stories and analogies, like the ones used in the Dragonbox Math Apps[WeW], are good elements that can work as simpler-to-understand examples to demonstrate concepts. Every learner is different: some may be capable of understanding the source materials easily enough, while others may strongly benefit from having an alternative situation describing the same lessons in contents easier to empathize with.

Challenges

Challenging learners to solve problems, as is the case with Foldit[Han10], works both as a means to stimulate their attention and entertainment, and as an opportunity to guide them into learning specific concepts. Solving a challenge creates a sense of accomplishment and satisfaction that motivates the learner to progress further in their quest for learning.

Competition and Cooperation

Working with and against others is an effective tool to create motivation for the learning activity. A competitive social scenario creates the opportunity for one to prove oneself and measure one's ability against others. Likewise, a cooperation scenario can stimulate one's willingness to learn, coming from the need to be able to help and contribute to a collective work experience.

Characters and Avatars

Characters and avatars are an effective means of reinforcing or transmitting behavior. It's been shown that for users engaging in experiences where characters or avatars they can relate to are available, their enjoyment and productivity within the activities rise, due to the increase in interest these actors cause. Such characters can even be modeled around depicting behaviors related to the activities, which creates a scenario where users interacting with them are known to be more likely to adopt those behaviors than users who do not.


Intrinsic and Extrinsic Rewards

Rewards drive motivation. Creating rewards for reaching goals is often a good incentive for learners to engage in related activities. It's important to understand the two types of rewards in order to use them properly.

Intrinsic rewards can be described as an implicit outcome of engaging in activities; they're the lesson learned, or the reason why someone seeks to practice learning. In gamification for learning, this is naturally the knowledge being conveyed. Extrinsic rewards are actual obtainable rewards; acquiring them is the goal of the activity. These can range from points or credits exchangeable for higher rewards, as is common in games, to even real-world tangible objects. Take as an example a student studying for an exam: the knowledge they acquire is their intrinsic reward, while the grade on the exam is their extrinsic reward.

Intrinsic rewards are the most desirable outcome of a learning activity, so activities should be designed around providing them; however, this sort of reward might often not be perceived by learners as something they desire to acquire. This is where extrinsic rewards come in: to create motivation for activities the learner would otherwise not be interested in engaging with. In simpler terms, extrinsic rewards should act as a motivator making users pursue the intrinsic ones. This makes it important to balance the amount of extrinsic rewards an activity provides, so that the goal of the activity remains acquiring knowledge, rather than driving learners to focus single-mindedly on those rewards and lose sight of the learning practice.

4.4 Summary

Gamification has its origins in entertainment, which is noted for being a kind of activity that captivates its audience's attention very well, an aspect desirable to translate into learning environments. Gamification projects can turn serious activities into far more productive ones, generating the same or better results in a more enjoyable experience. Game mechanics can be beneficial in helping individuals acquire knowledge, learn new skills and adopt behaviors.


Chapter 5

Planning, Methodology and Goals

This chapter describes the overall plans for upgrading DRIVER into a new iteration, the general development process behind this work, the issues that were tackled, and the motivation behind the choices made in producing the new version of the platform.

5.1 Introduction

DRIVER seeks to make developers' lives easier; however, the platform faces low adherence. This is mostly attributed to the user experience of the platform. As noted in Section 3.3, instead of making the learning task easier and more enjoyable, the way the tool currently works in reality produces a reasonable amount of extra work for developers to interact with the platform's features and benefit from them. Notably, its Learning Path feature never really invokes a sense of "return on investment" in the task of producing such paths. Without users making contributions to the Learning Path ecosystem, the platform loses its purpose.

This dissertation's goal was to enhance the appeal of using the platform and its features through the means of Gamification; however, it was deemed important to also rework its foundation to solve its inherent problems, as Gamification is no sort of silver bullet that fixes everything wrong with an application. Therefore, the aim was to restructure the platform in two steps: a Redesign step, focusing on improving functionality and user interaction with the tool, and a Gamification step, boosting the appeal and enjoyment of using it.

5.2 Development Methodology

Figure 5.1 displays a rough schedule of how the project evolved over time, showing the methodology behind producing its work. Next are the descriptions of each of the steps behind realizing this dissertation work.


Figure 5.1: Gantt chart of the dissertation’s development schedule by weeks

Planning (September 16 - September 29)

The work began by experimenting with technologies to determine which ones would be suited to the development process of the platform upgrade. This process was accompanied by the specification of the requirements for the new DRIVER iteration, modeling its new design and features. It should be noted that much of this planning, along with the literature review and research for it, was performed during a previous semester in the "Dissertation Planning" course, which precedes the focused development of this dissertation work.

Development (September 23 - December 08)

This encapsulates the entire development process of the platform. It began by implementing the basic functionality, continued into the Redesign and Gamification stages of the process, and finished with a period of testing for potential issues and small refinements to the new version of the tool. The development was accompanied by regular meetings with the dissertation's supervisor in order to ensure consensus between the implementation and the plans previously made.

Validation Experiment (November 11 - December 15)

Time dedicated to conceiving and performing the validation experiment in order to obtain results that could support this dissertation's arguments. It began with planning and locating pools of potentially available test subjects, followed by producing the required materials and preparing the platforms necessary for running the experiment and collecting its results. It naturally finished with performing the validation experiment itself.

Dissertation Documents (December 16 - January 31)

This period began with reviewing the results of the validation experiment in order to find the conclusions coming from it. Development then progressed into writing this dissertation document, taking the time to evaluate the outcome of the produced work and the conclusions drawn from producing it.


Delivery and Presentation (January 27 - February 24)

The final stage of the dissertation project. This involved an initial delivery of the produced documents for evaluation, followed by an open discussion of the produced work, and finally concluded the project with the final delivery of the produced documents and artifacts.

5.3 Redesign Plans and Goals

The Redesign aspect of this dissertation aimed at solving inherent problems within the interface and user experience of the original DRIVER platform, in an attempt to resolve issues that would preemptively tone down the use of gamification within the project. The following paragraphs note the major plans going into the redesign process and the goals behind each one of them.

Rework the Learning Path feature

The plan revolved around making path tracking start automatically and reducing the workload necessary to produce appropriate paths. In terms of tracking, for example, when a user opened the platform, their path would start recording immediately. If the user had been away/idle from the platform for a while, the tool would offer to cut the path and start tracking from scratch.

The new system would make use of the user's bookmarks throughout the documentation sections as cues to perform cuts in a path. Upon storing a path, users would be offered several options to more quickly clean up unnecessary steps from it, and the system would now attribute tags to the path based on the pages and sections it contained instead of relying on the user's input, significantly reducing the workload needed to produce Learning Paths (a sketch of this automatic tagging idea follows below). This had a simple goal: to turn Learning Path production into an easier and simpler process by automating more parts of it, so users would be less likely to see it as an extra "chore" in using the platform.
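As referenced above, here is a minimal sketch of how tags could be derived automatically from a path's contents. All names are hypothetical, and DRIVER 2.0's actual implementation may differ; the sketch assumes a lookup from page ids to the tags already attached to those pages.

```ts
// Minimal step shape, matching the earlier Learning Path sketches.
interface Step { pageId: string }

function deriveTags(steps: Step[], pageTags: Map<string, string[]>): string[] {
  const counts = new Map<string, number>();
  for (const step of steps) {
    // Accumulate the tags of every page the path visits.
    for (const tag of pageTags.get(step.pageId) ?? []) {
      counts.set(tag, (counts.get(tag) ?? 0) + 1);
    }
  }
  // Keep the most frequent tags as the path's labels.
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, 5)
    .map(([tag]) => tag);
}
```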

Replace the mechanisms of sharing paths and searching/browsing for them

It was deemed that sharing a path in the original DRIVER tool doesn't really promote self-benefit; it's something that seemingly only creates value for others and not for the person producing it. Browsing for Learning Paths doesn't work as an intuitive thing to do either, as users are rather used to searching for concepts, not for wrappers of them. So the idea was to revamp the feature for publishing Learning Paths, removing the search mechanism and giving users more control over their created content. Instead, users would now keep the paths for themselves, in a personal wallet/storage, creating a sense of ownership and keeping those references for later use. In this new scenario, paths would inherently influence how the platform displays its contents, instead of relying exclusively on sharing to improve others' perception of the documentation.


The plan was to create a highlighting feature for documentation sections and pages based on the paths users had stored in the platform. A section present in more paths would have a visibly different highlight compared to others. It should be noted that Learning Paths would still be shareable; however, this time around they wouldn't appear in a searchable catalogue. Instead, they'd be artifacts users could directly share with acquaintances, reinforcing the sense of owning and producing content within the platform. The goal behind these changes was to enhance how Learning Paths work within the platform so that they more proactively affect how users navigate content through the documentation. Users would now have direct feedback on the consequences of making use of the feature and potentially be more likely to engage with it.

Improve the user interface and experience

The plan was to introduce a new user interface, easier and more intuitive to use, promoting the visibility of the features available in the platform while preserving the core concepts behind the features of the original DRIVER. An example of what this new interface would do is keeping the recommendation widget always visible, as opposed to being hidden in a foldable tab by default, making its knowledge always easily accessible to the user. The goal here was to ensure widgets with important information would always be displayed to users, such as their current learning path, as well as making the overall interaction with the platform simpler and more comfortable, reducing the steps and transitions needed to make use of its features.

Increase the scalability of the platform

It has been noted in Chapter 3 that the original DRIVER platform works as a DokuWiki[GD] plugin. While this has its own merit, the way it works can be rather constricting to the implementation of new features in the platform (such as Gamification ones). The redesign process involved recreating the tool, so one part of it involved evaluating existing technologies to choose ones better suited to meet this dissertation's goals. In doing this, it was established that a goal for this project would be choosing the technologies, and potentially designing the platform, around enabling future scalability efforts.

5.4 Gamification Plans and Goals

The Gamification aspect of this dissertation focused on turning DRIVER into a more appealing platform, to foster user engagement and to create incentive to make use of its distinguishing features. The target of Gamification here was to benefit and improve the quality of the learning experience.


It should be noted that special attention was put into directing motivation towards the use of Learning Paths, the feature that makes the platform unique and that learners are not accustomed to using, leading developers to not reap the potential benefit the feature provides. So the general goal of the Gamification effort put into the new version of the DRIVER platform was to make the tool more likely to captivate its users and drive their motivation, so their productivity in using it would increase. The following paragraphs describe the major Gamification mechanics that were planned to be added to the new DRIVER iteration.

Customizable User Avatar/Character

The plan was to give users an Avatar they could customize, a character or mascot to represent themselves in the platform. This could work not only as a means of reinforcing behavior, but also as a reward engine to create awards for using the platform's features. For example, as the user unlocked more customization options for their avatar by using the platform's features, the evolving appearance of the avatar could work as an indirect indicator of the user's progress through learning how to use the framework described in the documentation. This would also reinforce the idea that the behavior behind creating Learning Paths is a rewarding one, intrinsically, as the user acquires and shares knowledge with others by using it, and extrinsically, as the user obtains customization options for their avatar.

Rewards for using the platform's features: Experience Points

Chapter 4 refers to intrinsic and extrinsic rewards as good motivators to increase a user's interest and productivity. Again, the intrinsic reward available in DRIVER would be the knowledge on how to use a framework. As a plan to increase the interest in said knowledge, a set of extrinsic rewards was devised. The new version of the platform would introduce a set of regular activities, such as creating Learning Paths or sharing them with other users. By performing these activities the user would acquire points, which would then be counted towards progressing levels in the platform and acquiring rewards. These are often regarded in games as "Experience Points", or "Exp" for short. By combining these obtainable points with the previously mentioned Avatar, to create a progress system where the user would seek to collect them in order to obtain new customizations, an extrinsic reward ecosystem would be created so that users felt motivated to interact with the platform's features.

Point/Experience Leaderboard

Developers like to compare their skills against others, so it was thought interesting to use competition to create motivation for the platform's users. The experience points previously mentioned could also work as a good indicator of the work a developer put into mastering the knowledge around using a framework. The plan here was that these points could also be tallied in a leaderboard,

creating a system where users could compare themselves to each other, possibly increasing their interest in acquiring those points and their overall usage of the tool.

Weekly Challenges

As an additional method to obtain experience points and progress through unlocking rewards in the platform, a "Weekly Challenge" feature was planned. These would be weekly problems produced by the platform's administrators that would challenge the developers' understanding of the documented framework. Users would submit answers that, when deemed correct, would reward them with a large amount of points for their work. These problems and answers would need to be simple enough for the evaluation task to be automated. One possibility was to use Learning Paths as the answers to be submitted, furthering the impact the feature would have on the user's usage of the platform. This feature was deemed interesting because it could also work as a good means of guiding users towards the knowledge they should obtain. Picture an academic context: a teacher could develop a problem that would require students to navigate knowledge they would potentially need later for a test or exam. In the context of an open and general framework, this could be used to introduce new features or updates to its users.

5.5 Expected Results: Gamification vs Redesign Only

The primary goal of this dissertation work was to assess the impact of Gamification as a reinforcement to improve the productivity and satisfaction of developers using the DRIVER platform. Despite the overall redesign of the tool, this remained the main topic to discuss and experiment with, in order to capture the primary results coming from the conclusions of this project. With this, a validation plan was formulated where a test experiment would run a comparison between two setups of the new DRIVER by having test subjects attempt to solve a problem using the platform. One of the setups would use a fully featured version of the new tool, including the recently added Gamification elements, named here the "Gamification Setup". The other setup would host the tool as well, but with its Gamification features disabled, completely unknown to its users, dubbed the "Redesign Only Setup". The goal would be to measure the following metrics across the two setups:

• Time to complete the problem (if time went over a limit, overall progress would be considered instead)

• Knowledge intake (measured using a small post-experiment form)

• Satisfaction/enjoyment in using the platform (measured in the same form)


The expectation behind the validation experiment pointed at the following results:

• The time to complete the problem would be similar across both setups

• The knowledge intake would be similar across both setups, or slightly higher for the Gami- fication Setup

• The satisfaction in using the platform would be higher for the Gamification Setup

The time spent between both setups shouldn't differ much: one of the setups would potentially garner less interest in the tool, while the other would have significantly more features for the users to get accustomed to, balancing times out. It should be noted that in a scenario where users were already accustomed to the tool, the difference in time should favour the Gamification Setup. The knowledge intake should also not differ much, since both setups would be equipped with the same features regarding the knowledge of the framework within; however, it was hypothesized that since Gamification would drive the users' interest in the features of the platform, test subjects could potentially be more prone to exploring the tool, hence creating a chance of a higher knowledge intake for the Gamification Setup. Last but not least, the satisfaction in using the platform should be higher for the Gamification Setup, as the game mechanics introduced in this setup would be expected to drive the interest of the test subjects using it, more so than the Redesign Only Setup. This metric is particularly important, because satisfaction in performing an activity usually translates into a more efficient, or at least more focused, involvement in it, which in a long-term scenario could positively influence the other two previously mentioned metrics. The fulfillment of these expectations would then point at the hypothesis that Gamification does indeed work as a good motivator to raise the interest and appeal of using DRIVER, and that its addition works as a supplement to aid developers' productivity in the task of learning how to use frameworks.

5.6 Summary

The methodology behind the development of this dissertation followed an iterative model. It began with the review of literature and technologies together with the planning of the overall project, continued with the production of the new DRIVER tool, followed by its validation experiment, and finished with writing down the resulting documents and conclusions. The planning scheduled a redesign of the original DRIVER's user interface and experience so that users could better engage with the platform's features and its inherent problems could be solved, putting special focus on making the most of its Learning Path component. It also aimed to integrate Gamification into the tool to further increase the appeal of using it. The expected results pointed at Gamification being a good addition to the platform, increasing the satisfaction of its users and potentially making interaction with the tool a more productive experience.


Chapter 6

DRIVER 2.0: The New Version

This chapter takes a look at the new DRIVER version produced as the result of this dissertation work. It provides an overview of the technologies and architecture behind its design, a walk through its features, the motivation behind their design and how they compare to the original tool.

6.1 Introduction

The focus of this dissertation was to produce a new version of the DRIVER tool. The platform was rebuilt from the ground up during the course of this project as an attempt to improve its performance regarding the task of helping developers master the use of frameworks. Like the original, its basic structure remains that of a wiki and it ultimately aims to house software documentation related to frameworks. The new software resulting from this project was naively named DRIVER 2.0, a candidate successor to the original one. Figure 6.1 demonstrates the new version of the tool displaying a documentation page and its overall surrounding interface elements. The updated platform integrates Gamification practices to raise interest in its working environment, one of the major focuses behind its creation, while redesigning the original DRIVER experience with more intuitive controls and taking advantage of modern web technologies. DRIVER 2.0 is a single page web application designed for modern and frequently updated desktop web browsers, like Mozilla Firefox[Moz] or Google Chrome[Gooa]. While support for "older" desktop browsers and their mobile relatives wasn't part of the development process of this dissertation, the tool has the necessary groundwork in place to extend its compatibility to these platforms later. A bit counterintuitively given its naming, being a single page application (SPA) doesn't mean it doesn't feature multiple pages within itself, which would go against the concept of producing a wiki or documentation. Instead, this is more related to how it technically behaves with web browsers. A so-called multi page application (MPA), which is the most frequent behavior observed when


Figure 6.1: Overview screenshot of the DRIVER 2.0 platform

navigating through most websites, will, in a rough explanation, cause web browsers to reload the entirety of its content every time the user visits a new page within the application. Unlike MPAs, an SPA will only reload the portions of its content that need updating when a user prompts for it, meaning that an SPA, depending on its design, can change its displayed content faster than an MPA and has the ability to keep interface controls on screen while performing work in the background. It should be noted that an MPA can also have pages with components that update themselves without requiring a complete reload, but that isn't the general behavior of the whole application, in contrast to an SPA. The truth is that DRIVER 2.0 could have been conceived as either of these types of web applications, each with its own advantages, but making it an SPA was deemed more interesting both in line with the planned Gamification features to be added (since they would likely benefit from "always visible widgets") and for the ability to reduce the browser page transitions needed for users to activate certain features, avoiding distractions from its content and keeping them engaged with the platform.
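As a rough illustration of the SPA behavior described above, the sketch below intercepts navigation and swaps only the content area, keeping the rest of the interface mounted; the endpoint, element id and renderPage() helper are assumptions, not DRIVER 2.0's actual code:

```javascript
// Sketch of SPA-style navigation: fetch only the page data and update the
// content area in place, instead of letting the browser reload everything.
async function navigateTo(pageId) {
  const response = await fetch(`/api/pages/${pageId}`); // hypothetical route
  const page = await response.json();
  // renderPage() is a hypothetical helper turning page data into HTML.
  document.querySelector('#content-area').innerHTML = renderPage(page);
  // Keep the URL in sync so the browser's back/forward buttons still work.
  history.pushState({ pageId }, page.title, `/docs/${pageId}`);
}
```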

6.1.1 About the figures in this chapter

This chapter contains several figures to illustrate the functionality of DRIVER 2.0. It should be noted that within their content, information for an existing framework can be found, namely an adapted version of PircBot's[Mut] documentation, provided as an example. This was the framework used during the validation experiment described in Chapter 7, where more details about it are provided.


Figure 6.2: General system architecture of the DRIVER 2.0 platform

6.2 Architecture and Technologies

DRIVER 2.0 follows a simple client-server architecture. The server, coupled with a management system, handles the processing, storage and retrieval of all the information in the system, running on its own machine, while the client can be opened in web browsers on the users' computers. Naturally, multiple clients can run simultaneously, with the server centralizing the entire flow of information. It should be noted that, for the purpose of this dissertation, DRIVER 2.0 was only built and tested for running on a single server; however, due to the server process's stateless implementation, it should be technically possible to distribute processing among several instances without the need for an extensive additional implementation effort. Figure 6.2 provides a simplified overview of the system architecture and the major technologies involved.

6.2.1 Server Architecture and Technologies

The server was programmed under "Node.js"[Ope] (simply referred to as "Node" from now on), a runtime built for producing cross-platform applications using JavaScript. The choice behind using Node was a mix of its increasing popularity and the huge package ecosystem surrounding it, but ultimately boiled down to preference and experience in using it, as other available technologies could be used to produce the same results. An HTTP API is implemented by the server software and used in communications with the client. This API was produced by combining a series of packages available for Node, but the heart

of it is the "Express"[Nod] framework, a popular library for producing web applications in this environment. The API establishes a series of endpoints for the clients to make their requests to. Most of these respond with messages using the JSON format, pairing nicely with the technologies involved in both the client and the server applications, with a few exceptions during the users' authentication process. Information is kept by the server using a "PostgreSQL"[Theb] relational database. While other database systems were available, including non-relational ones, the motivation behind choosing this one again comes down to preference and previous experience with the technology, although the availability of free hosts for this type of database management system played an important role this time around. The server has the capability of hosting the client's files, so they can be retrieved by web browsers in order to access it; however, this isn't a requirement, as both the server and client can be configured so the latter is served by other sorts of HTTP servers.
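For illustration, a minimal endpoint in the style described above could look like the sketch below; the route, table and column names are assumptions rather than DRIVER 2.0's actual API:

```javascript
// Sketch of a JSON endpoint built with Express and node-postgres.
const express = require('express');
const { Pool } = require('pg'); // PostgreSQL client for Node

const app = express();
const db = new Pool(); // connection settings come from environment variables

// Hypothetical endpoint: return the latest version of a documentation page.
app.get('/api/pages/:id', async (req, res) => {
  const { rows } = await db.query(
    'SELECT * FROM pages WHERE id = $1 ORDER BY version DESC LIMIT 1',
    [req.params.id]
  );
  if (rows.length === 0) return res.status(404).json({ error: 'Not found' });
  res.json(rows[0]); // JSON response, like most endpoints in the API
});

app.listen(3000);
```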

6.2.2 Client Architecture and Technologies

The client was built using "React"[Facb] at its core, a JavaScript framework for building user interfaces for websites and other targets such as mobile applications. React is noteworthy, when compared with other frameworks used to produce single page applications, for its focus on only implementing user interface related components, while competitors often include out-of-the-box solutions for extra concerns like the way applications handle API requests. This leaves the developer with freedom of choice for what to use in the rest of their application. In DRIVER 2.0's case, the functionality not implemented by React is delegated to native JavaScript features standard to current web browser specifications, such as the mechanism for performing HTTP requests asynchronously. React is often used to produce SPAs, but works fine with multi page applications as well. Yet again, the question of "Why React?" is answered by a mix of popularity, available 3rd party packages and personal preference, as alternative frameworks to produce the client exist, as previously suggested. "Create React App"[Faca] was used to bootstrap the client project, creating a pipeline of development software used to produce the client so it would better follow web standards and design guidelines for applications built using the framework, while improving development productivity by reducing the margin for errors. "Create React App" is worth mentioning here because it links a series of default dependencies to its projects, which were kept for DRIVER 2.0, one example being "Webpack"[Web], a JavaScript module bundler that significantly reduced the payload and complexity of the structure behind DRIVER 2.0's client. With this in mind, the client is summed up as a web application that takes advantage of the latest web standards and is built largely on the foundation of what the currently implemented HTML, JavaScript and CSS specifications enable.
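As a small illustration of that division of responsibilities, the sketch below pairs a React component with the browser's native fetch API; the component and endpoint names are assumptions for the example:

```javascript
// Sketch of a client component: React renders the interface, while the
// asynchronous HTTP request relies on the browser's native fetch API.
import React, { useEffect, useState } from 'react';

function DocumentationPage({ pageId }) {
  const [page, setPage] = useState(null);

  useEffect(() => {
    fetch(`/api/pages/${pageId}`) // hypothetical endpoint
      .then((response) => response.json())
      .then(setPage);
  }, [pageId]);

  if (!page) return <p>Loading…</p>;
  return (
    <article>
      <h1>{page.title}</h1>
      {/* Sections rendered from the page's Markdown content would go here. */}
    </article>
  );
}

export default DocumentationPage;
```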


Figure 6.3: Core interface elements of the DRIVER 2.0 platform

6.3 General Features

This section takes a look at the general features available in DRIVER 2.0, the motivation behind their inclusion and how they compare to the original version of the platform. Gamification related features are discussed later in Section 6.4 and are put out of focus here. DRIVER 2.0’s interface is composed of 3 core elements:

• On the left, area (1) of Figure 6.3, is located the sidebar. This is where most navigation tools are present, along with the user’s profile widget and shortcuts for related pages such as the authentication (login) ones.

• On top, area (2) of Figure 6.3, is located the "My Path" menu. This menu keeps a permanent display of the user's path through the documentation and holds shortcuts for Learning Path related features, such as the user's wallet or the weekly challenge. This menu can expand itself on demand to reveal the path editing tool for the user's needs of saving and customizing their Learning Paths.

• Lastly, area (3) of Figure 6.3 comprises the larger part of the user interface, the content area. This is where documentation is displayed and other pages of the tool are put up for use.

6.3.1 Documentation Pages

Documentation pages, Figure 6.4, are the main type of page available in the platform and are central to its learning purpose. They are meant to host the framework's information and follow a structure that divides them into sections. Each page begins with a header section that is used


Figure 6.4: A documentation page within DRIVER 2.0

to name it and can either give a quick introduction to the page's topic or hold content right away, with the remaining sections usually dedicated to grouping content.

Figure 6.5: A page section within DRIVER 2.0

Looking at Figure 6.5, each section contains a title (1), a set of tags identifying the topics it covers (2) and a content box supporting rich formatting and media (3). Each section can be added to the user's Learning Path by clicking the "Add to Path" button (4) and is rated with a score (5). A section's score is calculated by tallying the number of Learning Paths stored in the platform that include the section as part of them. This is used to give learners a perception of the most relevant sections within a page, so they can locate important pieces of information more quickly. In fact, with the exception of a page's header, sections with scores above average


Figure 6.6: Comparison between a normal and a highlighted section

(when compared to other sections within the same page) will be highlighted in a distinguishing yellow color, as seen in Figure 6.6. This creates a mechanism where users saving Learning Paths within the platform inherently influence the look of the documentation, creating visual guides for others to discover the most important bits of knowledge within, and removing the weight the original DRIVER platform put on having to share those paths.
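A minimal sketch of the scoring and highlighting rule just described follows; the data shape is an assumption, but the logic mirrors the text: a section's score is the number of stored paths containing it, and non-header sections above the page average get highlighted:

```javascript
// Sketch of the highlight rule: compare each section's score (number of
// Learning Paths that include it) against the page's average score.
function markHighlights(sections) {
  // The header section (index 0) is never highlighted.
  const body = sections.slice(1);
  const average =
    body.reduce((sum, s) => sum + s.score, 0) / Math.max(body.length, 1);
  return sections.map((section, index) => ({
    ...section,
    highlighted: index > 0 && section.score > average,
  }));
}
```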

In comparison to the original tool, these pages work much the same: both comprise sectioned content, sections are identified with tags and the user can add each of them to their Learning Path, the major difference being the scoring and highlighting functions.

Figure 6.7: DRIVER 2.0’s page editor


Looking into how creating pages and their content works in DRIVER 2.0, pages can be created and modified by approved users through the editing feature seen in Figure 6.7. The editor comprises all the necessary functions for editing the previously mentioned components of a documentation page. Pages in DRIVER 2.0 work on a version basis, much like wiki software. A newly created page begins with an initial version and each subsequent edit increments it as a new iteration of the page. All versions of a page can be viewed by all users of the platform, but the system will generally guide them into visiting the latest one. As for content, it is created by writing and formatting it using an adapted version of the Markdown language, similar to GitHub Flavored Markdown's specification[Git]. This language performs significantly well in writing documentation artifacts for frameworks and is quite popular in related applications, such as documents found in software repositories. The fact that it's a format known and frequently used by developers creates further appeal in using it within DRIVER 2.0 and potentially makes migrating existing documentation to it easier. This being said, this version of Markdown supports several features you'd find in rich text documents (text styling, lists, tables and others), embedding media (such as images) and producing code blocks capable of displaying syntax highlighting. All of these create the necessary foundation to support and enable the use of the design practices for documentation discussed in Chapter 2.

6.3.2 Recommendation, Searching and Tag Filters

Figure 6.8: DRIVER 2.0’s recommendation widget

On the sidebar of the new tool, a recommendation widget can be found, as showcased in Figure 6.8. This feature is visible (expanded) by default and recommends to users the next documentation page or section they should visit based on their current Learning Path, more specifically, the last step found in it. This is used to display content learners might be interested in reading next, based on the paths stored in the system. The widget sorts and scores entries by looking, across the other paths in the system, at the pages and sections that usually follow the user's last step, specifically ranking them by the number of times they are registered as a sequel to the current content.
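A sketch of that successor-frequency ranking is shown below; the data shapes are assumptions, but the idea is the one described: count how often each step follows the user's last step across all stored paths and sort by that count:

```javascript
// Sketch of recommendation scoring: rank candidate next steps by how often
// they follow the user's last step within the stored Learning Paths.
function recommendNext(lastStepId, storedPaths) {
  const counts = new Map();
  for (const path of storedPaths) {
    path.steps.forEach((step, i) => {
      const next = path.steps[i + 1];
      if (step.id === lastStepId && next) {
        counts.set(next.id, (counts.get(next.id) || 0) + 1);
      }
    });
  }
  // Highest successor count first.
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id, score]) => ({ id, score }));
}
```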


Figure 6.9: DRIVER 2.0’s contextual recommendation feature

Clicking the "Customize" button (1) will prompt users with a page allowing them to edit the tags used for contextual recommendation (2), as seen in Figure 6.9. When these tags are set, dubbed the user's "Tag Context", they cause a series of features to behave differently, favoring the topics they describe and acting as a sort of filtering mechanism for the documentation. Beginning with recommendation, the widget will now only consider for its scoring paths labeled with tags that include the ones from the user's Tag Context, allowing the user to select what the system should focus on when suggesting what to visit next. An example can be seen in Figure 6.10.

Figure 6.10: Recommendation widget being influenced by the user’s selected tags

At the same time, while contextual recommendation is active, documentation pages will gray out sections that don't feature the tags the user set up as their preferred topics. A sample of the effect is shown in Figure 6.11, when the user sets up the system to focus on the tag "color". Like the score based highlighting introduced before, marking sections with "less relevant" styling allows users to more quickly perceive what content might or might not be relevant to their needs, under the assumption tags are being appropriately used by both authors and learners.
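The dimming rule can be pictured with the following sketch (field names are assumptions): a section stays fully visible only when it shares at least one tag with the user's Tag Context:

```javascript
// Sketch of the tag-context filter: sections that share no tag with the
// user's Tag Context are marked as less relevant (grayed out).
function applyTagContext(sections, tagContext) {
  if (tagContext.length === 0) return sections; // no filtering when unset
  return sections.map((section) => ({
    ...section,
    dimmed: !section.tags.some((tag) => tagContext.includes(tag)),
  }));
}
```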


Figure 6.11: Matching vs non-matching section while context tags are active

The system also features a simple searching mechanism, built using the database's full text search capabilities, seen in Figure 6.12. Using the tool allows users to query for pages and sections of the documentation. The results are sorted based on their popularity among the Learning Paths stored in the platform, using the same scoring system pages display for their sections. Users can set up search to only return results from the pool of the latest versions of each documentation page or to include older iterations as well. In regards to the previously mentioned Tag Context feature, here those tags can also be used to filter out results that do not contain the topics the user is focusing on. This is, however, not something "enforced" by having contextual recommendation active, as applying those filters is toggleable within the tool.

Figure 6.12: DRIVER 2.0’s search tool
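For illustration, a full-text query over sections could be issued from Node as in the sketch below; the table and column names are assumptions, and the real tool additionally orders results by their popularity among stored Learning Paths, which is omitted here for brevity:

```javascript
// Sketch of a PostgreSQL full-text search issued from Node.
const { Pool } = require('pg');
const db = new Pool();

async function searchSections(query, latestOnly = true) {
  const sql = `
    SELECT id, title,
           ts_rank(to_tsvector('english', title || ' ' || content),
                   plainto_tsquery('english', $1)) AS rank
    FROM sections
    ${latestOnly ? 'WHERE is_latest_version' : ''}
    ORDER BY rank DESC
    LIMIT 20`;
  const { rows } = await db.query(sql, [query]);
  return rows;
}
```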


As a wrap up, in comparison to the original DRIVER platform, most of these features are unique to this new iteration of the tool, the common ground being the existence of recommendation and searching. Recommendation has a similar base behavior as in the original, but here it is upgraded into something that's always within the user's sight, increasing its utility, while the user can also tweak its output to better suit their needs.

6.3.3 Path Menu and Editor

Figure 6.13: DRIVER 2.0’s My Path Menu

Figure 6.13 displays DRIVER 2.0's "My Path" menu (or toolbar), which can be found at the top of the client's viewport. This UI element holds shortcuts to Learning Path related features but, most importantly, the display of the user's current path and the tool for editing it. Roughly at its center is the user's path. This is a horizontal control that lists the sequence of steps involving the user's exploration through the documentation, which is ultimately the information the platform focuses on providing utility around. Like in the original DRIVER tool, a Learning Path is composed of a series of steps that form a piece of knowledge that can be used to store, retrieve, share and infer information from, both for its creators and other users of the system, as suggested in Chapter 3. Here, in DRIVER 2.0, Learning Paths look and behave a bit differently, but work around the same foundation and are central to the tool's functionality. Unlike in the original version of the tool, here Learning Paths are always tracked; however, the basic behavior of adding steps to them doesn't differ much from the previous iteration. Steps start being added to a path as soon as a user starts browsing documentation, with each visit to a new page automatically registering a new step onto the path. Users can also manually add steps for pages and sections by clicking the "Add to Path" button found in all of them, as seen in Figure 6.14.

Figure 6.14: Manually adding a step to Learning Path from a documentation section


Figure 6.15: Visual anatomy of a Learning Path step

A step is composed of an annotation of a few pieces of information. Visible to the user are the names and versions of the pages/sections involved, along with a marker depicting whether the step was registered manually by the user, like in Figure 6.15. Internally, steps store a few additional pieces of information, namely page identifiers and links. To remind users about the state of their Learning Path, the platform regularly emits a discreet warning asking the user to review their progress, at intervals based on the number of steps accumulated. But the major differences from the original tool come from the process of editing and saving those paths. Clicking the "Editor" button on the menu will expand it to reveal the Path Editor seen in Figure 6.16. This dialogue loads the user's current Learning Path and allows performing a series of operations with it. Its design focuses on making the process of working with paths simpler than in the original platform.

Figure 6.16: DRIVER 2.0’s Path Editor
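Before moving into the editor, the information a step carries can be pictured as the object below; the field names are illustrative assumptions, not the actual schema:

```javascript
// Illustrative shape of a Learning Path step, based on the description above.
const step = {
  pageId: 42,           // internal identifier of the visited page
  pageVersion: 3,       // version of the page at the time of the visit
  sectionId: 'setup',   // present only for section-level steps
  title: 'Connecting to a server',
  link: '/docs/42/v3#setup',
  manuallyAdded: true,  // drives the manual/automatic marker shown to the user
};
```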


Beginning at the top of the editor, the first region (1) comprises a field to edit the path's title, which is required for storing it on the platform (but not for the rest of the features) and used as a general indicator of its purpose. Below the title field is noted "Tags will be automatically added to your path after submission". This is a change from the original DRIVER, where tags had to be manually added by the user. Here, instead, the system looks into the steps found within the Learning Path and merges their tags into a new collection describing the path. It should be noted that in this version of the platform, the user no longer has the ability to search for paths, which relied on those manually set up tags. The path's purpose is now conveyed by its title, and the tags are used by internal recommendation systems, no longer carrying as much weight in describing the topic of the path. Removing the requirement to manually add tags to a path makes the effort of creating one much less of a "chore" than in the original. The second region (2) of the editor contains the path itself, along with some pruning controls. First, it should be noted that internally this isn't the same path as the one displayed on the main toolbar next to "My Path", but a copy of it, so the user can perform edits on it without the risk of losing information by making mistakes. Each step is dotted with 3 pruning tools that allow focused refinement of the path; from left to right these work to "Remove all steps to the left", "Remove the step itself" and "Remove all steps to the right" respectively. In the third region (3) are the utility controls. Two of them are also related to pruning the Learning Path, like the ones from the second region, but behave in ways that will usually have a higher impact in cleaning up unwanted steps, making path refinement an easier task. The utility controls from this region are:

• Undo Changes to Path: This button restores the path to its initial state, undoing any changes. It's effectively the same as re-loading the one from "My Path" into the editor.

• Merge Duplicates: This button prunes path steps by merging duplicates into their earliest appearance. This preserves a user's manual addition indicator when comparing an automatic and a manual step (a sketch of this operation follows the list).

• Remove Automatically Added: This removes all steps that were automatically added by the system when visiting documentation pages.
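As promised above, a minimal sketch of the "Merge Duplicates" operation follows; it is one plausible interpretation of the described behavior, under assumed field names:

```javascript
// Sketch of "Merge Duplicates": keep the earliest occurrence of each step,
// preserving the manual-addition marker if either copy carries it.
function mergeDuplicates(steps) {
  const seen = new Map(); // insertion order preserves earliest appearances
  for (const step of steps) {
    const key = step.pageId + '#' + (step.sectionId || '');
    const existing = seen.get(key);
    if (existing) {
      existing.manuallyAdded = existing.manuallyAdded || step.manuallyAdded;
    } else {
      seen.set(key, { ...step });
    }
  }
  return [...seen.values()];
}
```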

Last but not least are the major controls in the fourth region (4). These controls are usually the final part of the task of saving or refining a path. The leftmost button, "Load to My Path", behaves symmetrically to the "Undo Changes to Path" button from the previous region: this one loads any changes made to the path in the editor back to the global path stored under "My Path". This button is useful in cases where the user already has some ideas on how to refine their current Learning Path but isn't quite done exploring the documentation yet. The second button, "Reset My Path", is self-explanatory. It wipes out all steps stored in the global path or, from another perspective, loads an empty one into it. This is what the user clicks when the produced path doesn't have anything they are interested in keeping or when they simply want to start over. The last two

buttons are "Save to Wallet" and "Save to Wallet and Reset". Beginning with the latter, this one is a combination of the former and the "Reset My Path" button. What "Save to Wallet" does is store the path into the user's wallet, and therefore on the server. The difference between the two final buttons is that one will just save the path, allowing the user to continue editing and potentially produce another derivation of it, while the other will clear the system, assuming the user is set on approaching another topic. Overall, the way path production happens in DRIVER 2.0 aims to create a simpler and more intuitive workflow than in the original, reducing the steps and clicks needed to successfully create paths, resulting in a more satisfying experience built around what makes the platform distinguish itself from others.

6.3.4 Path Wallet and Sharing

Figure 6.17: DRIVER 2.0’s Path Wallet showing a list of the user’s creations

New to the platform in DRIVER 2.0 is the "Path Wallet". All users have their own wallet, a place to keep the Learning Paths they produce. In the original DRIVER, when a path was created, it would be stored on the platform without any external signs of ownership, meaning the path was an artifact that got sort of "lost" in the system and gave the idea these were meant only for the benefit of others. Keeping authority over the paths creates a feeling of ownership over the content that may create an incentive for users to make use of the feature, and also transmits the idea that these pieces of knowledge can be used for one's own benefit. A user's wallet can be seen in Figure 6.17, showcasing the user's list of produced paths in the system.


Figure 6.18: Description of a list item from the user’s Path Wallet

Each item in the list matches a Learning Path; an overview of the control can be seen in Figure 6.18. These controls display the title and tags of the path and can be expanded to reveal its contents as well. All of them provide a few options to work with the paths, giving users the ability to load, share and delete them. "Load to My Path" replaces the user's current Learning Path in the Path Menu; this can be used to enable quicker navigation of its contents (rather than opening the wallet page over and over to click on a step) or even to load it into the editor to modify it as the basis for producing a new one.

Figure 6.19: A Learning Path’s sharing page within DRIVER 2.0

Furthering the idea of owning and sharing content created within the platform is a dedicated "Share" feature. In the original platform, sharing a path basically meant setting it up to be available for searching in the system, which was earlier explained as something that doesn't really promote Learning Paths well. In this version, users can literally share their paths with others, like one would hand a piece of paper to a friend in real life, or send them an image using instant messaging. Activating the "Share" button within a path found in the user's wallet

will put it up for sharing and direct the user towards a page where they'll be provided a URL to share with their acquaintances. Visiting this URL will naturally give users the same details found in the wallet, along with features they can use to either set the path as their current track or copy it into their own wallet, as seen in Figure 6.19. This page also displays details about the path's author, which is useful for promoting the Gamification features discussed later in this chapter. Comparing the behavior of sharing paths in DRIVER 2.0 with the original, the new system is noteworthy for promoting these tools into something that's harder to miss or ignore than in the previous platform, which is advantageous in reinforcing its purpose.

6.3.5 Tutorial

On a bit of a side note, the platform also includes a tutorial to get users started with the tool. The tutorial, seen on Figure 6.20, introduces users to the concept of Learning Paths and provides a brief overview of the features the platform comprises both in terms of documentation and Gamification.

Figure 6.20: DRIVER 2.0’s Tutorial Page

6.4 Gamification Features

Not part of the original DRIVER platform, and added in this version, are the Gamification features, one of the core focuses behind the rework of the tool. These features are entirely devoted to increasing the user's satisfaction and the appeal of using the platform, orbiting around the goal of creating incentive for users to interact with DRIVER's tools and increase their productivity around them. Most of these features play into making the user explore the platform's features in order to

acquire rewards, which are meant to indirectly guide them into better mastering both their use and the knowledge stored within the documentation.

6.4.1 User Avatar and Experience Points

Figure 6.21: DRIVER 2.0 sidebar’s user profile widget

Always displayed on the top left of DRIVER 2.0's screen is the user's profile widget, which tracks their experience points and displays their avatar. Figure 6.21 details the elements of the widget. As previously noted in Chapter 4, avatars are a good way to reinforce behavior, and their inclusion made sense in this dissertation's scope, as one of the project's aims was to get users to explore the activity of producing Learning Paths more. With this, every user in the system has a personal avatar, in the form of a customizable cow mascot. The avatar effectively represents the user in the system and creates a presence mechanism that benefits the social component of the platform. The user can always see their avatar, but can also display it to other users when sharing paths or appearing on the leaderboard.

Figure 6.22: Examples of possible user avatar customizations


The avatar can be customized in a few visual ways, with some examples showcased in Figure 6.22. Users have access, through their profile page, seen in Figure 6.23, to a set of options that allows them to select obtainable props to change their mascot's appearance. These include hats, facial expressions, clothing, fur colors, etc., along with an earnable title displayed with the avatar that emulates real-world ones (like Engineer, Doctor, etc.). Unlocking these options is the primary reward of the Gamification features found in the platform, and they compose part of the extrinsic rewards of the system, also discussed in Chapter 4.

Figure 6.23: User’s profile page showcasing the customization options for the avatar

So how does one actually unlock those customization options? Well, it involves a level and experience system. The options are tied to levels in a mechanic that's quite analogous to how certain games play. A newly registered user on the platform begins at level 1. By performing set activities, they'll acquire experience points. Accumulating a set number of these experience points will have them progress to level 2, and so on into levels 3, 4, etc. At each of these levels new customization options are unlocked, up to a limit where no more are available (though more could potentially be added in subsequent updates to DRIVER). However, progression doesn't stop once all customization rewards are unlocked, so the user can keep collecting points even after that, possibly tallying them as some sort of score. The user's level and experience points can both be found on the sidebar widget and the profile page and, indirectly, work as a subtle indicator of how much time the developer spent using the platform and, potentially, a rough indicator of their progress through mastering the documentation within. For these reasons, levels and points go along quite well with the intended reward engine behind the avatar.
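A sketch of such a progression rule is shown below; the point thresholds are assumptions, not DRIVER 2.0's actual values:

```javascript
// Sketch of a level/experience rule: a flat cost per level, with rewards
// unlocked at or below the user's current level.
const XP_PER_LEVEL = 100; // assumed flat cost per level

function levelFromExperience(totalXp) {
  // Level 1 at 0 XP, level 2 at 100 XP, level 3 at 200 XP, and so on.
  return Math.floor(totalXp / XP_PER_LEVEL) + 1;
}

function unlockedRewards(totalXp, rewardsByLevel) {
  const level = levelFromExperience(totalXp);
  return rewardsByLevel.filter((reward) => reward.level <= level);
}
```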


6.4.2 Daily Activities and Weekly Challenge

Figure 6.24: User’s daily objectives in DRIVER 2.0

Delving further into the topic of experience points, the base reward for interacting with Gamification elements in DRIVER, the way these are obtained involves two distinct types of regular activities. The first kind are daily objectives. These are preset activities that reward experience points for performing certain tasks in the system. They're designed around both having the user visit the platform regularly and making use of its Learning Path features. As such, rewards for performing them are available in a roughly daily cycle (the actual reset period is less than 24 hours, to give users more freedom for when to return to acquire them). This is a popular mechanism among many games. The fact that these activities encourage the user to visit the tool regularly can be very beneficial in the context of frameworks; for example, it allows changes to the libraries or security advisories to be acknowledged earlier by developers, since they are more likely to encounter the updates by visiting the documentation regularly. Because they also require users to engage with the platform's features, it's a great way to create incentive to explore them. These daily objectives can be found on the user's profile page, first seen in Figure 6.23 and put in focus in Figure 6.24. The widget tracks progress through these activities, which consist of the following tasks:

• Login into DRIVER: Rewards users for visiting the platform, promoting their attendance.

• Visit the leaderboard: Asks the user to visit the Leaderboard, as a reinforcement to establish its intended competitive environment; more details in Section 6.4.3.

• Create a path: Encourages the creation of Learning Paths, one of the central features of the system. Provides a larger reward than the previous two, tying in with it being a more "complex" objective.

• Have another user visit one of your paths: In other words, "Share a Learning Path", which ties in nicely with the previous objective and endorses the social component of the platform. Note that, under the original DRIVER's definition, this means having someone else visit a user's path URL, not creating a Learning Path.


Figure 6.25: DRIVER 2.0’s Weekly Challenge page

The second kind are Weekly Challenges; a sample can be found in Figure 6.25. These challenges are problems written by the platform admins, to which users answer by using one of their saved Learning Paths. As these happen once a week (technically there's nothing enforcing them to be weekly, but it's perceived as a generally good interval to release them), they award large chunks of experience points to users for interacting with them. By submitting a Learning Path, users are rewarded with a base point amount for their participation. If the path they submitted is deemed a correct answer, the reward extends to a larger value. Provided the user is one of the first to provide a correct answer, they're granted an extra bonus as well. Submissions can only happen once per challenge, so emphasis is put on solving it correctly. The answer's evaluation process works in a semi-automated manner. Foremost, an answer is deemed correct when the submitted Learning Path contains a determinate set of pages/sections in an appropriate order, while avoiding unnecessary ones. A slight margin for ignoring obsolete steps is given to ease up the process. With this, the system considers an answer correct if it matches one of a group of possible correct submissions provided by the admins. For example, labeling sections as numbers and considering a maximum of 1 obsolete step (the actual value in DRIVER is 3), an admin creates a problem where the intended answers fit the group of [2,4,6] and [3,4,6]. If the user submits a path containing (a sketch of this matching rule follows the examples):

• [2,4,6]: the result is a correct answer.

• [4,6,2]: the result is a wrong answer; the order is incorrect.

• [1,3,4,6] or [3,8,4,6]: the result is a correct answer, with only 1 obsolete step.

• [1,3,4,5,6]: the result is a wrong answer; 2 obsolete steps (1 and 5).

• [3,4,4,6]: the result is a correct answer, though the duplicate of 4 is considered obsolete.
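The sketch below is one minimal interpretation of that matching rule: a submission is correct if one of the admin-provided solutions appears, in order, within the submitted path, with at most a given margin of obsolete steps left over:

```javascript
// Sketch of the semi-automated answer check described above.
function isCorrectAnswer(submitted, solutions, margin = 3) {
  return solutions.some((solution) => {
    let i = 0; // position within the expected solution
    for (const step of submitted) {
      if (i < solution.length && step === solution[i]) i++;
    }
    const matchedAll = i === solution.length;
    const obsoleteSteps = submitted.length - solution.length;
    return matchedAll && obsoleteSteps <= margin;
  });
}

// Reproducing the examples from the text, with a margin of 1:
isCorrectAnswer([1, 3, 4, 6], [[2, 4, 6], [3, 4, 6]], 1);    // true
isCorrectAnswer([4, 6, 2], [[2, 4, 6], [3, 4, 6]], 1);       // false
isCorrectAnswer([1, 3, 4, 5, 6], [[2, 4, 6], [3, 4, 6]], 1); // false
```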

This feature makes a good fit for DRIVER. First, it allows framework authors to guide users into exploring topics within the documentation in a creative way. Through its nature of having to solve a problem correctly, it attempts to boost the learners' knowledge of the software. Through the fact of it being a challenge with acquirable rewards, it creates incentive for exploration that might otherwise be deemed uninteresting by the learning developers. Lastly, because the activity requires users to engage with the Learning Path feature, it creates additional appeal for making use of the platform's features.

6.4.3 Leaderboard

Figure 6.26: The leaderboard as seen in DRIVER 2.0

The leaderboard is a place where users can showcase their avatars and compare their progress in DRIVER 2.0's Gamification features, ranking them according to their level and the experience points obtained. It's ultimately a mechanism to promote competition, something that usually drives developers' interest, as they often like to measure their skills against peers. Competition can create motivation to learn, from one's desire to equal or surpass others, giving them reason to engage in the related activities. Because DRIVER 2.0's experience system can be seen as a sort of indicator of one's investment in the platform, and thus in the framework documented within, it made sense to create a leaderboard using those points to rank users for their achievements. The leaderboard reinforces

both the social aspect of the platform, as it gives users a mechanism they can use to compete against each other, and the rest of the features of the system, as it creates further incentive to interact with them.

6.5 DRIVER 2.0 with Gamification disabled

Figure 6.27: DRIVER 2.0 with Gamification features disabled

For the sake of testing, and to review the effects of Gamification on its users as in Chapter 7, DRIVER 2.0's client includes a setting to disable Gamification elements (only available to admins); a screenshot of the system running with Gamification hidden can be seen in Figure 6.27. While Gamification is toggled off, the following pages and links to them are removed and made inaccessible:

• Profile Page and Daily Objectives

• Weekly Challenge

• Leaderboard

Along with the aforementioned changes, the following features are also modified:

• User's Sidebar Profile Widget: Avatar, title, level and experience points are hidden from the widget, reducing it to only display the user's or the platform's name, depending on whether the user is logged in or not, as seen in the top left corner of Figure 6.27. The profile page link is replaced with a direct link to account related features, such as changing one's password.

• Shared Path Pages: The author's avatar, title, level and experience points are also hidden from this page, leaving it only with the name of the author.


• Tutorial: Any sections regarding Gamification features are removed, and those features are appropriately hidden from the remaining figures (for non-Gamification parts).

To sum it up, any traces of Gamification are hidden, and the user's avatar won't appear anywhere while the toggle lasts. It's worth mentioning that turning Gamification off is purely a client-side matter; on the server side, no data is lost and experience points are still tallied for activities the user can still perform, such as logging in or creating a Learning Path, although the user won't be aware of it as long as the features remain hidden.
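As a simple illustration of the client-side toggle, a profile widget might conditionally render as in the sketch below (component and field names are assumptions):

```javascript
// Sketch of conditional rendering for the Gamification toggle: when
// disabled, gamified elements simply never render on the client.
import React from 'react';

function ProfileWidget({ user, gamificationEnabled }) {
  if (!gamificationEnabled) {
    // Redesign Only setup: show just the user's (or the platform's) name.
    return <div>{user ? user.name : 'DRIVER 2.0'}</div>;
  }
  // Gamification setup: avatar, title, level and experience points.
  return (
    <div>
      <img src={user.avatarUrl} alt="User avatar" />
      <span>{user.title} {user.name}</span>
      <span>Level {user.level}, {user.xp} XP</span>
    </div>
  );
}

export default ProfileWidget;
```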

6.6 A short note about Cows

Figure 6.28: Images part of an introductory video to the original DRIVER tool

One may be wondering why the avatars used in DRIVER 2.0 take the form of a cow. The reason goes back to the research and an introductory video presented on the home page of the original DRIVER[FA] platform. The concept of Learning Paths is introduced there as being analogous to something cows regularly do in order to find food. In a brief explanation, cows roam around pastures trying to find the best grass to eat. By the time they locate said grass, they've naturally left a trail of footsteps behind which connects their home to the place where the ideal food is, as suggested by the part of the video seen in Figure 6.28. Subsequent members of the herd will avoid going

through that process again; instead, they will follow their previous colleague's trail to the good grass, saving time by not repeating the task. As far as the analogy goes for frameworks, developers are seen as the cows, the grass is seen as the knowledge on how to use the framework, and the steps taken within the documentation to acquire said knowledge are mimicked by the road to the desirable meal, which can be shared by many. The original DRIVER's concept was built with some inspiration from this behavior. In all honesty, the original intention was to include human avatars as part of the new platform, but these proved to be more than time could afford and, looking around other places where avatars exist, one can see human ones are quite common. Part of the wish behind the production of this dissertation was to make DRIVER stand out, so when the idea arrived to use that humble animal as a sort of mascot the users could interact with, it was taken as a natural addition due to its history within the platform's development. It's not that cows are "new" to the concept of building a mascot around, it's been done before, but associating this farm animal with an activity that's entirely driven by technology? That's unusual, and unusual catches people's attention. Plus, it's a generally nice animal, even praised in some parts of the world, so why not?

6.7 Summary

DRIVER 2.0 is the result of this dissertation work's effort to produce a redesigned version of the original tool that addresses its issues while adding new Gamification features. The new tool is a web application developed using React[Facb], Node.js[Ope] and other technologies, following a simple client-server architecture. Several of its features went through a redesign process focused on making the platform easier and more intuitive to use, with special emphasis on making its Learning Path ecosystem more accessible and the overall information available more visible. The platform now includes several features built around an experience point reward system and a customizable avatar (in the form of a cow mascot) as part of its Gamification effort, in an attempt to create incentive for users to interact with the platform's tools and motivation for them to better learn the knowledge behind the framework documentation the system is designed to host.

Chapter 7

Validation Experiment

This chapter discusses the validation experiment performed to assess the effects of Gamification in DRIVER 2.0. It walks through the definition and concepts behind the experiment and performs an analysis of the results obtained.

7.1 Introduction

In software engineering, empirical studies help researchers better understand the effects of different methods and techniques applied in their projects. When performed using students as test subjects, the product of these studies is often accepted skeptically by the community, due to problems assuring the external validity of the results or the students' commitment to performing the experiment. When developed using professionals or more seriously committed subjects, there's usually better acceptance, but tests involving this audience also suffer from similar problems, although to a smaller degree. Nevertheless, this doesn't take away from the fact that performing these studies can return valuable information for the research, given that their goals are properly set and the threats to their validity are appropriately taken into account. With students, these studies can prove useful for obtaining preliminary results when taking a fresh look at a research topic. This chapter covers an empirical study built around determining the impact of the Gamification features put into DRIVER 2.0. Are they beneficial? Are they harmful? Or do they not make a difference at all? The test experiment loaded a framework's documentation into the new version of the tool and used students as test subjects to solve a problem, looking for clues to the answers to these questions.


7.2 This Experiment vs The Ideal One

As a preliminary note on the experiment, after reading Chapter 6, one might be wondering what the appropriate procedures for performing the validation test would have been. One can notice that this dissertation work involved a significant effort in redesigning the platform; however, this experiment focuses on the Gamification aspect of the work. Likewise, it can be seen that the Gamification features implemented favor recurring use of the tool over time, rather than a single session activity. The experiment compares two versions of the upgraded platform and the impact of Gamification being present (or not) in a test spanning roughly less than two hours. While observing the results of Gamification being added to the system was always the higher end objective of this work, the more appropriate procedure would be to also compare the effects of the redesign against the original version of the platform. In a similar sense, a more adequate test to verify the hypotheses built around the added Gamification features would last for an extended amount of time, perhaps a few weeks. There are two primary reasons why this didn't happen. The first is the lack of time. The development process of this dissertation work happened over the course of loosely one academic semester (ignoring the "dissertation preparation" period, which focused more on research). An extended test wouldn't really find a place in the work schedule, and fixing the issues suggested in Chapter 3 to get the original platform to function in its entirety within modern browsers would also require an additional work period. The second reason concerns the nature of extending the test period. Finding available subjects for the performed test was complex enough; for a longer period, this condition would have worsened significantly. In addition, controlling the validity of an extended test experiment would also be difficult, as it would be far more susceptible to the influence of external factors, since there's no reasonable way of keeping the test subjects isolated.

7.3 Experiment Details

The experiment divided subjects between two distinct deployments of DRIVER 2.0, one with Gamification features enabled and the other without them, in order to solve a problem using a framework. A protocol was followed to ensure all students were at the same stage of the experiment before advancing to the following one, reducing the risk of external interference. During this process, questionnaires were distributed to the students to obtain feedback, and the students' behavior was actively supervised. The overall design of the activity focused on minimizing potential threats to validation.

7.3.1 Test Procedure

Figure 7.1 provides an overview of the procedure followed in performing the validation experiment.


Figure 7.1: Overview of the validation experiment protocol

Prior to the experiment date, students were informed of the event taking place. They were given no information about the contents of the experiment (no information on the platform, framework or the problem to be solved) other than that they were being asked to test something new to them. The experiment itself began by waiting for students to arrive and form work pairs (groups of 2) for the task. Unknown to them, the formed pairs were divided into two larger groups, one that would use the “No Gamification” deployment of the platform and the other that would use the “Gamification” one. This sorting introduced no bias other than strategically selecting the pairs based on their position in the work room, so they'd be less likely to notice some of their peers were using a different deployment of the platform. Shortly after this, the pre-experiment questionnaires were distributed to the students in order to obtain feedback on their background experience going into the activity. As soon as all forms were filled in, anonymous credentials were distributed, indicating the names the students should assume within the software and the location of their assigned deployment on the web. The credentials were anonymized (accounts with preset names) so that students wouldn't recognize their peers within the platform, remaining strangers to each other.


This was followed by instructing the students to log into the platform and set up the necessary tools for solving the problem (clients, code editors, etc.), while asking them to review a set of rules they should obey during the experiment. These rules were promptly announced by the supervisors before the subjects even had access to the page enumerating them, and followed the lines of “don't progress into the next stage of the experiment until told so” and “don't obtain information from external sources or the original documentation”. These, of course, aimed at reducing potential threats to validation. The experiment then moved into a brief period where the students were asked to review the platform's tutorial, in order to get initially acquainted with its features. As a reminder, the tutorial is tailored to include (or not) a guide on the Gamification features, depending on whether they are enabled. After reading the tutorial, the users were directed to the problem page and given the freedom to explore the platform's contents at will, in order to solve the exercises proposed there. This period lasted for the majority of the experiment, but was limited to the time available. The system implemented some features auxiliary to the experiment in order to track the time spent solving the problems. If the students managed to solve all the problems before the time expired, that duration would be recorded as part of solving the problem. Otherwise, their overall progress in terms of solved steps would be taken into account instead. The experiment then finished with students filling in the post-experiment questionnaire adequate for the deployment they used, so as to acquire the remaining results necessary to close the activity.

7.3.2 Test Subjects and Grouping

The test subject pool was composed of 16 students from the 2nd year of the “Integrated Master's in Electronics and Computer Engineering” (in Portuguese: “Mestrado Integrado em Engenharia Electrotécnica e de Computadores”), another course from FEUP. At this point in the course, students have learned how to work with a few programming languages, including Java, and with frameworks within them, which was adequate for the intended test experiment. Subjects were set up in pairs, with the intent of making the framework learning process faster and more productive, to accommodate the short time frame available for the activity. The students were initially divided into two groups of 8 (4 pairs of 2); however, during the course of the experiment, observation revealed one of the pairs was out of touch with the activity, not performing as requested and engaging in other activities on their personal devices. Fortunately, there wasn't any perceived interference with the other students coming from this pair. Their results were still tallied and included among the others' in the table found in Appendix E, but they were consequently discarded from the statistical analysis performed later in this chapter. This pair is labelled as “Fire” within the table of results. With this, there were effectively two groups of students:


• No Gamification (NG): A group composed of 8 students (4 pairs) that used the deployment of DRIVER 2.0 with Gamification disabled.

• Gamification (GF): A group composed of 6 students (3 pairs) that used the deployment of DRIVER 2.0 with Gamification enabled.

7.3.3 Chosen Framework

Given the students' experience in using Java, the selected framework naturally pointed to being something that would integrate with the Java environment they already knew, removing potential interference from the subjects not knowing how to use the dependent tools in order to develop with the framework. With this, the choice was to use PircBot [Mut] and mirror its documentation within DRIVER. PircBot is a Java IRC (Internet Relay Chat) framework, developed by Paul Mutton, with a simple and easy to use API for producing IRC bots or clients. The library was simple enough for the subjects to make use of it within the available time frame, while being complex enough to not make the task trivial. Additionally, at the time of this dissertation, because IRC was no longer a popular online communication mechanism, it was unlikely that the subjects would've had previous contact with the framework, which proved ideal for the experiment.
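As a point of reference, the snippet below sketches the minimal shape of a PircBot-based bot, following the pattern of the framework's own tutorial: subclass PircBot, pick a nickname, then connect and join a channel. The nickname, server address and channel are placeholders, not the experiment's actual configuration.

import org.jibble.pircbot.PircBot;

// Minimal PircBot usage: extend the PircBot class, set a nickname,
// then connect to a server and join a channel. Event handlers such
// as onMessage are overridden to react to channel activity.
public class HelloBot extends PircBot {

    public HelloBot() {
        setName("HelloBot"); // placeholder nickname
    }

    @Override
    public void onMessage(String channel, String sender, String login,
                          String hostname, String message) {
        if (message.equalsIgnoreCase("hello")) {
            sendMessage(channel, sender + ": hello yourself!");
        }
    }

    public static void main(String[] args) throws Exception {
        HelloBot bot = new HelloBot();
        bot.setVerbose(true);           // print raw IRC traffic for debugging
        bot.connect("irc.example.org"); // placeholder server address
        bot.joinChannel("#test");       // placeholder channel
    }
}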

7.3.4 Test Environment

The experiment happened in one of the students' classrooms, a familiar environment for them, ideal to reduce the influence of external factors on the activity. The room was equipped with computers, but the subjects were also allowed to work on their personal laptops as long as they complied with the stipulated rules and kept the exercise within a single device (1 per pair). Using the classroom's computers vs the personal laptops wasn't deemed to make a difference; the idea was that the students would work where they were most comfortable. In addition to DRIVER 2.0 and the tools to develop in Java, the students were asked to interact with two additional pieces of software. The first was an IRC server set up using InspIRCd [Ins]. This would be the server subjects would connect their bots to, for development and to test their functionality. The server was configured so it wouldn't impose restrictions on the experiment's process, as public solutions would raise issues because of common anti-spam policies, limiting the amount of connections and messages that could be sent from a single location, making the experiment impossible. Paired with the IRC server, students were given an IRC client to connect with. This was a web IRC client named KiwiIRC [Dar], which didn't require much setup other than providing the server's address. This client had two purposes in the activity. The major purpose was for the subjects to send messages to their bots so they could test the functionality asked to be developed as part of the problem. The other purpose was to share Learning Paths created in the platform, in case the students had produced them.


Students weren't allowed to transmit direct information to their peers, but they were allowed to send them Learning Paths, one of the features of DRIVER 2.0. The NG and GF groups had different channels in the IRC server for sharing these paths, not interacting with each other, along with a channel for each group to develop their own bot.

7.3.5 Pre-experiment Questionnaire

At the beginning of the experiment, a questionnaire (Appendix A) was given to the students in order to ascertain their background experience going into the validation test. In this sort of empirical test, it's important that the two groups performing the experiment don't differ significantly from each other in terms of knowledge and practice relating to the activity being performed. This said, the aim was to rule out potential outliers going into the experiment and to find out who had previously worked with the framework in question. The quiz contained a series of questions that the students would answer using a Likert scale [Lik32]: a set of psychometric statements where the subject indicates how strongly they identify with each one. In this case, the scale ranged from 1 to 5 in terms of how much the subject agreed with the questions, where these values mean:

1. Strongly disagree

2. Disagree

3. Neither agree nor disagree (Neutral)

4. Agree

5. Strongly agree

7.3.6 Tasks to be solved using the framework

The problem given to the students (Appendix B) during the experiment had them create an IRC bot that would perform a series of tasks, divided into steps along its course. The problem was designed around having the subjects make use of the platform's features when exploring the documentation, with the benefit of doing so increasing as the development progressed. The problem began with the implementation of a set of features described in PircBot's tutorial and diverged into making use of other capabilities of the framework.
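For illustration, here is a hedged sketch of what handlers for two of these steps might look like in PircBot; the message formats come from the problem statement in Appendix B, while the class and nickname are placeholders. Colors is PircBot's helper class for IRC text formatting.

import java.util.Date;
import org.jibble.pircbot.Colors;
import org.jibble.pircbot.PircBot;

// Sketch of two of the problem's steps: replying with the current time
// in bold/blue on the 'time' command, and dodging a thrown bucket of
// water with an action message (action messages arrive via onAction).
public class ChallengeBot extends PircBot {

    public ChallengeBot() {
        setName("ChallengeBot"); // placeholder nickname
    }

    @Override
    public void onMessage(String channel, String sender, String login,
                          String hostname, String message) {
        if (message.equalsIgnoreCase("time")) {
            String time = new Date().toString();
            // Whole text in bold, time part in blue, as objective 2.1 asks.
            sendMessage(channel, Colors.BOLD + "The time is now " + Colors.BLUE + time);
        }
    }

    @Override
    public void onAction(String sender, String login, String hostname,
                         String target, String action) {
        if (action.equalsIgnoreCase("throws a bucket of water")) {
            sendAction(target, "dodges!"); // reply with an action message
        }
    }
}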

7.3.7 Post-experiment Questionnaire

At the end of the experiment, a second questionnaire was given to the students to obtain feedback on the validation experiment. The quiz was available in two forms, one for the “No Gamification” group (Appendix C) and another for the “Gamification” group (Appendix D). Both forms were composed of the same groups of questions, aiming to uncover the resulting differences between the two environments. The GF group's quiz had an additional segment of questions aiming to acquire insight on the students' opinion of the Gamification features they interacted with.

7.4 Results Analysis

Following up on the questionnaires performed as part of the test experiment, this section runs a statistical analysis on the acquired results and reviews the conclusions coming from them. As a reminder, the tallied results are included in Appendix E, and the subjects labeled as the “Fire” group weren't considered for these statistics, for the reasons mentioned in Section 7.3.2.

7.4.1 Statistical Relevance

In order to determine statistical relevance (or not) in the majority of the analyses below, the following process is applied to the results coming from the questionnaire answers. Considering the following:

• H0 : the null hypothesis

• H1 : the alternate hypothesis

• NG : the “No Gamification” group of students and related results

• GF : the “Gamification” group of students and related results

• ρ : the probability estimator of wrongly rejecting H0

And using the following set of hypotheses:

• H0 : NG = GF : The groups’ results are equivalent to each other

• H1 : NG ≠ GF : The groups' results are different from each other

• H1 : NG < GF : The measure in the NG group is lower than in the GF one

• H1 : NG > GF : The measure in the NG group is higher than in the GF one

For each applicable answer, the results for both groups were compared using the Wilcoxon-Mann-Whitney test [HW99] with a significance level of 5%. In simple terms, while using this non-parametric test, probability values of ρ ≤ 0.05 are considered significant and ρ ≤ 0.01 highly significant. In other words, when ρ is less than or equal to those values, the hypothesis of the groups having similar results (H0) can be rejected with statistical significance, opening the way to consider the alternative hypothesis (H1) instead. Likewise, if the value of ρ is above those values, H0 cannot be discarded, meaning the groups share a high degree of similarity. In each of the answers analysed using this method, an alternative hypothesis is proposed and reviewed accordingly.
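For illustration only, the per-answer comparison could be reproduced with Apache Commons Math's MannWhitneyUTest, as in the sketch below. The sample values are made up, not the experiment's data, and the library reports the U statistic with an asymptotic two-sided p-value, whereas the tables in this chapter list the closely related Wilcoxon W statistic.

import org.apache.commons.math3.stat.inference.MannWhitneyUTest;

// Compare two groups' Likert answers with a Wilcoxon-Mann-Whitney test
// and reject H0 (equivalent groups) at the 5% significance level.
public class GroupComparison {
    public static void main(String[] args) {
        double[] ng = {2, 3, 3, 4, 2, 3, 1, 2}; // illustrative NG answers
        double[] gf = {1, 3, 2, 4, 2, 2};       // illustrative GF answers

        MannWhitneyUTest test = new MannWhitneyUTest();
        double u = test.mannWhitneyU(ng, gf);     // U statistic
        double p = test.mannWhitneyUTest(ng, gf); // two-sided p-value

        System.out.printf("U = %.1f, p = %.3f%n", u, p);
        if (p <= 0.05) {
            System.out.println("Reject H0: the groups differ significantly.");
        } else {
            System.out.println("Cannot reject H0: the groups look equivalent.");
        }
    }
}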


        No Gamification                      Gamification                         Statistics
        1  2  3  4  5   x̄     σ             1  2  3  4  5   x̄     σ             H1   W     ρ
BG1.1   2  1  4  1  0   2,50  1,00           2  1  2  1  0   2,33  1,11           ≠    43    0,852
BG1.2   0  0  3  5  0   3,63  0,48           0  1  3  2  0   3,17  0,69           ≠    36,5  0,282
BG1.3   1  1  4  2  0   2,88  0,93           2  0  3  1  0   2,50  1,12           ≠    41    0,662
BG1.4   0  0  1  4  3   4,25  0,66           0  0  1  5  0   3,83  0,37           ≠    36,5  0,282
BG1.5   6  0  2  0  0   1,50  0,87           5  0  1  0  0   1,33  0,75           ≠    43    0,852
BG1.6   7  1  0  0  0   1,13  0,33           6  0  0  0  0   1,00  0,00           ≠    42    0,755
BG1.7   7  1  0  0  0   1,13  0,33           6  0  0  0  0   1,00  0,00           ≠    42    0,755
BG2.1   0  0  0  0  8   5,00  0,00           0  0  0  0  6   5,00  0,00           ≠    45    1

Table 7.1: Statistics of the “Background” part of the pre-experiment questionnaire

7.4.2 Background

This section analyses the background feedback obtained from the experiment’s pre-questionnaire and references the data found in Table 7.1. The goal of these questions was to characterize both groups in terms of knowledge and experience going into the activity.

BG1.1 - I have considerable experience using software frameworks

Let H1 : NG ≠ GF; there was no significant difference (ρ = 0,852) in the scores for the “No Gamification” group (x̄ = 2,50, σ = 1,00) and the “Gamification” group (x̄ = 2,33, σ = 1,11). Both groups consider themselves to have a similar degree of previous experience in using frameworks. While the scores tend to be negative in terms of the ability to use frameworks, it should be noted the test subjects come from an area where the term isn't often used and certain programming lingo might be unfamiliar to them, causing the low scores. Based on the subjects' background, it's at least known they've had contact with frameworks before.

BG1.2 - I have considerable experience using the Java programming language

Let H1 : NG ≠ GF; there was no significant difference (ρ = 0,282) in the scores for the “No Gamification” group (x̄ = 3,63, σ = 0,48) and the “Gamification” group (x̄ = 3,17, σ = 0,69). As expected, both groups consider themselves to have similar knowledge of the Java programming language. There's a slight tendency pointing towards the NG group feeling more comfortable in the matter, but there isn't a major discrepancy between them.

BG1.3 - I have considerable experience using 3rd party libraries within my Java applications

Let H1 : NG ≠ GF; there was no significant difference (ρ = 0,662) in the scores for the “No Gamification” group (x̄ = 2,88, σ = 0,93) and the “Gamification” group (x̄ = 2,50, σ = 1,12). This question is essentially a subset of BG1.1; the goal was to determine the groups' experience in making use of frameworks within the Java programming language. Both consider themselves to be at reasonably the same ability level, and the low scoring can be attributed to the same reasons found in BG1.1.

BG1.4 - I have considerable experience using Integrated Development Environments (IDE)

Let H1 : NG ≠ GF; there was no significant difference (ρ = 0,282) in the scores for the “No Gamification” group (x̄ = 4,25, σ = 0,66) and the “Gamification” group (x̄ = 3,83, σ = 0,37). The scoring indicates both groups have similar experience using IDEs, but again, there's a small tendency indicating the NG group feels more comfortable with the topic, similar to what was seen in BG1.2.

BG1.5 - I have considerable experience using Internet Relay Chat services (IRC)

Let H1 : NG ≠ GF; there was no significant difference (ρ = 0,852) in the scores for the “No Gamification” group (x̄ = 1,50, σ = 0,87) and the “Gamification” group (x̄ = 1,33, σ = 0,75). Both groups consider themselves similarly inexperienced in using IRC services. The inexperience wasn't deemed problematic, as the system works much like modern chat rooms, which the students are very likely accustomed to; plus, they wouldn't be required to use any advanced features, and the problem description guided them through the less orthodox parts. This was expected to happen.

BG1.6 - I have considerable experience developing IRC bots

Let H1 : NG ≠ GF; there was no significant difference (ρ = 0,755) in the scores for the “No Gamification” group (x̄ = 1,13, σ = 0,33) and the “Gamification” group (x̄ = 1,00, σ = 0,00). Both groups equally considered themselves inexperienced in developing IRC bots, which, following the previous question, was pretty much expected. This wasn't deemed a problem either, because the process behind developing one in PircBot didn't stray far from understanding a few input/output mechanisms, which is usually part of a developer's base knowledge of how to use a programming language.

BG1.7 - I have considerable experience developing chat bots (for any kind of chat service)

Let H1 : NG ≠ GF; there was no significant difference (ρ = 0,755) in the scores for the “No Gamification” group (x̄ = 1,13, σ = 0,33) and the “Gamification” group (x̄ = 1,00, σ = 0,00). Again, both groups consider themselves equally inexperienced in developing chat bots. This question mostly aimed to figure out who had previously come into contact with chat bot development, as the process is fairly similar among different services, potentially giving an advantage in the experiment's development, but that wasn't the case for any of the subjects.


BG2.1 - I’ve never used or had contact with the PircBot framework

Let H1 : NG ≠ GF; there was no significant difference (ρ = 1,00) in the scores for the “No Gamification” group (x̄ = 5,00, σ = 0,00) and the “Gamification” group (x̄ = 5,00, σ = 0,00). The conclusion here is simple: nobody had used the framework prior to the test experiment, an ideal scenario.

Background : Summary

Both groups were reasonably balanced in terms of background experience coming into the test activity. There was, however, a slight indication that the “No Gamification” group was more comfortable with the software tools that were going to be used. This difference wasn't major, but it could potentially have favoured that group's scoring in the development process and framework knowledge intake categories of the post-experiment results.

7.4.3 External Factors

This section analyses the external factor feedback obtained from the post-experiment questionnaire and references the data found in Table 7.2. The goal of these questions was to ascertain whether external factors caused a significant impact on the students' work in the development activity.

        No Gamification                      Gamification                         Statistics
        1  2  3  4  5   x̄     σ             1  2  3  4  5   x̄     σ             H1   W     ρ
EF1.1   0  3  2  3  0   3,00  0,87           2  3  0  1  0   2,00  1,00           >    32    0,108
EF1.2   0  0  0  7  1   4,13  0,33           0  0  0  5  1   4,17  0,37           <    59    0,95
EF1.3   0  0  1  4  3   4,25  0,66           0  0  0  2  4   4,67  0,47           <    52    0,345
EF1.4   5  0  2  1  0   1,88  1,17           4  1  0  1  0   1,67  1,11           ≠    43,5  0,852

Table 7.2: Statistics of the “External Factors” part of the post-experiment questionnaire

EF1.1 - I found the whole experience environment intimidating

Let H1 : NG > GF; there was no significant difference (ρ = 0,108) in the scores for the “No Gamification” group (x̄ = 3,00, σ = 0,87) and the “Gamification” group (x̄ = 2,00, σ = 1,00). Both groups felt equivalently intimidated by the experiment, with no major difference between them. However, the scoring tilts a bit in favor of the hypothesis that the NG group would feel more intimidated, possibly because the lack of the Gamification features increased the tension of performing the activity, but there isn't enough evidence to disprove the equivalence between them. In terms of scoring, a certain degree of intimidation is to be expected, as the experiment was a rather unusual activity compared to the students' routine.


EF1.2 - I enjoyed programming and developing in the experiment

Let H1 : NG < GF; there was no significant difference (ρ = 0,95) in the scores for the “No Gamification” group (x̄ = 4,13, σ = 0,33) and the “Gamification” group (x̄ = 4,17, σ = 0,37). Both groups enjoyed programming and developing in the activity to a similar degree, with no noticeable differences among them. This reinforces the point that the results in the previous answer are equivalent across the groups and that the scoring might not reflect anything substantial in terms of external impact.

EF1.3 - I would work with my partner for this experiment again

Let H1 : NG < GF; there was no significant difference (ρ = 0,345) in the scores for the “No Gamification” group (x̄ = 4,25, σ = 0,66) and the “Gamification” group (x̄ = 4,67, σ = 0,47). This question was built around determining if any of the pairs could have been a bad match for the experiment. In general, there wasn't any sort of mismatch in terms of co-operability among the students, and both groups fared similarly in this area.

EF1.4 - I kept getting distracted by other colleagues outside my group

Let H1 : NG ≠ GF; there was no significant difference (ρ = 0,852) in the scores for the “No Gamification” group (x̄ = 1,88, σ = 1,17) and the “Gamification” group (x̄ = 1,67, σ = 1,11). There were no major differences across groups here either. Interference caused by peers was considered low and rather equivalent across the two groups.

External Factors : Summary

There was no significant interference caused by external factors during the experiment's course. The activity was taken seriously to acceptable levels across both groups, and the impact of these factors remained equally balanced between them.

7.4.4 Overall Satisfaction

This section analyses the overall satisfaction component of the post-experiment questionnaires and references the data found in Table 7.3. The goal of these questions was to determine which of the deployments of DRIVER 2.0 the students enjoyed more.

OS1.1 - Overall, the setup of this experiment was suitable for solving every task presented

Let H1 : NG ≠ GF; there was no significant difference (ρ = 0,583) in the scores for the “No Gamification” group (x̄ = 3,88, σ = 0,93) and the “Gamification” group (x̄ = 3,67, σ = 0,47). Both groups found the setup equally suitable for the experiment and demonstrated a positive tendency towards it. This means that in both scenarios the subjects displayed reasonably the same learning curve in getting accustomed to the new environment's tools.


        No Gamification                      Gamification                         Statistics
        1  2  3  4  5   x̄     σ             1  2  3  4  5   x̄     σ             H1   W     ρ
OS1.1   0  1  1  4  2   3,88  0,93           0  0  2  4  0   3,67  0,47           ≠    40    0,573
OS1.2   0  3  1  1  3   3,50  1,32           0  3  0  2  1   3,17  1,21           <    41    0,662
OS1.3   1  1  1  4  1   3,38  1,22           3  2  0  1  0   1,83  1,07           >    30,5  0,059
OS1.4   1  6  1  0  0   2,00  0,50           3  1  2  0  0   1,83  0,90           >    41,5  0,662
OS1.5   0  0  2  3  3   4,13  0,78           0  0  0  5  1   4,17  0,37           <    45    1
OS1.6   0  1  1  6  0   3,63  0,70           0  0  3  2  1   3,67  0,75           <    43,5  0,852
OS1.7   0  1  1  3  3   4,00  1,00           0  0  2  1  3   4,17  0,90           <    58    0,852

Table 7.3: Statistics of the “Overall Satisfaction” part of the post-experiment questionnaire

OS1.2 - I found the documentation available to be sufficient

Let H1 : NG < GF; there was no significant difference (ρ = 0,662) in the scores for the “No Gamification” group (x̄ = 3,50, σ = 1,32) and the “Gamification” group (x̄ = 3,17, σ = 1,21). In the two deployments, the test subjects' opinions on the documentation installed in the platform pointed to similar results. In both cases, roughly half of the subjects scored it negatively and the other half positively, which indicates that the perceived suitability of the documentation wasn't influenced by the difference in features.

OS1.3 - I felt the need to have access to more information on how to use the framework from sources outside of DRIVER

Let H1 : NG > GF; there was no significant difference (ρ = 0,059) in the scores for the “No Gamification” group (x̄ = 3,38, σ = 1,22) and the “Gamification” group (x̄ = 1,83, σ = 1,07). Although the test doesn't pass the significance check, it should be noted that the estimated probability is very close to the 0,05 threshold, with the NG group displaying a reasonably higher feeling of having to access information outside of the platform, as hypothesized by H1. In theory, the actual need to seek information outside the tool should be similar in both scenarios, because the documentation is the same; however, the unsatisfactory scores coming from the NG group may indicate that the Gamification features played a role in capturing the users' focus on the platform, as desired.

OS1.4 - Despite my experience, the tools available delayed my work considerably in using the PircBot framework to solve the problem proposed

Let H1 : NG > GF; there was no significant difference (ρ = 0,662) in the scores for the “No Gamification” group (x̄ = 2,00, σ = 0,50) and the “Gamification” group (x̄ = 1,83, σ = 0,90). Both groups display a similar general feeling that the tools available to solve the problem were sufficient and didn't have a negative impact on the development process.


OS1.5 - I had fun and enjoyed using DRIVER.

Let H1 : NG < GF; there was no significant difference (ρ = 1) in the scores for the “No Gamification” group (x̄ = 4,13, σ = 0,78) and the “Gamification” group (x̄ = 4,17, σ = 0,37). The two scenarios are similar, and the students seem to have enjoyed using DRIVER during the validation experiment. There is some caution to take around this question, however, as the subjects could potentially be describing their overall satisfaction with the experiment and not the tool itself.

OS1.6 - I would recommend DRIVER to a colleague or acquaintance

Let H1 : NG < GF; there was no significant difference (ρ = 0,852) in the scores for the “No Gamification” group (x̄ = 3,63, σ = 0,70) and the “Gamification” group (x̄ = 3,67, σ = 0,75). There was a similar and positive feeling from both sides towards recommending DRIVER to peers.

OS1.7 - I would use DRIVER again

Let H1 : NG < GF; there was no significant difference (ρ = 0,852) in the scores for the “No Gamification” group (x̄ = 4,00, σ = 1,00) and the “Gamification” group (x̄ = 4,17, σ = 0,90). Much like the previous answer, a positive feeling towards using DRIVER again came similarly from both sides of the experiment. It's somewhat interesting that the subjects' scoring favoured using the platform again more than recommending it to peers.

Overall Satisfaction : Summary

In terms of satisfaction, the feeling is fairly balanced and positive across the two test environments deployed during the experiment. There's no clear indicator that Gamification increased the subjects' satisfaction in performing the activity, but the results from question OS1.3 might point towards it.

7.4.5 Development Process

This section analyses the development process component of the post-experiment questionnaires and references the data found in Table 7.4. The questions presented in this group follow the premise of evaluating how difficult the students found the proposed steps of the problem. The questions are preceded by “It was hard to find out how to use the framework to complete objective number N of the problem...” and there's one of them for each step of the exercise. Subjects were asked to only fill in the questions for the objectives they had solved, which naturally translates into some of them having smaller pools of individuals than others. Later in this section, an analysis of the students' overall progress through the problem's tasks, comparing the results in the two environments, is also performed.


        No Gamification                      Gamification                         Statistics
        1  2  3  4  5   x̄     σ             1  2  3  4  5   x̄     σ             H1   W     ρ
DP1.1   2  6  0  0  0   1,75  0,43           1  1  3  1  0   2,67  0,94           >    46    0,081
DP1.2   3  5  0  0  0   1,63  0,48           2  3  1  0  0   1,83  0,69           >    56,5  0,662
DP1.3   2  2  0  2  0   2,33  1,25           1  2  1  0  0   2,00  0,71           >    21    0,914
DP1.4   2  2  2  0  0   2,00  0,82           1  1  2  0  0   2,25  0,83           >    31    0,762
DP1.5   1  3  0  0  0   1,75  0,43           2  1  1  0  0   1,75  0,83           >    17,5  0,886
DP1.6   2  0  0  0  0   1,00  0,00           1  1  0  0  0   1,50  0,50           >    4     0,667
DP1.7   1  1  0  0  0   1,50  0,50           1  1  0  0  0   1,50  0,50           >    5     1
DP1.8   1  1  0  0  0   1,50  0,50           0  0  0  0  0   0     0              n/a  n/a   n/a

Table 7.4: Statistics of the “Development Process” part of the post-experiment questionnaire

DP1.1 - Objective 1) Connect to the IRC server and join the channel

Let H1 : NG > GF; there was no significant difference (ρ = 0,081) in the scores for the “No Gamification” group (x̄ = 1,75, σ = 0,43) and the “Gamification” group (x̄ = 2,67, σ = 0,94). The test indicates there was no significant difference across the groups; however, the calculated probability of incorrectly rejecting that hypothesis is very low. In general, students from the GF group seem to have had more difficulty in starting the development process. This could be attributed to a number of factors. One possibility is that the hypothesis introduced in the pre-questionnaire, that the NG group was more comfortable with the technologies, could have come into play. Another is that the Gamification elements could have improperly captured the initial attention of the subjects and dragged out the startup process more than necessary. It's difficult to know for sure, but this goes a bit against the expectations set for the Gamification project.

DP1.2 - Objective 2) Reply with the current time on command

Let H1 : NG > GF; there was no significant difference (ρ = 0,662) in the scores for the “No Gamification” group (x̄ = 1,63, σ = 0,48) and the “Gamification” group (x̄ = 1,83, σ = 0,69). There was a similar degree of difficulty perceived by both groups, and they seem to agree that this objective wasn't particularly difficult.

DP1.3 - Objective 2.1) Update the style of the reply from Objective 2)

Let H1 : NG > GF; there was no significant difference (ρ = 0,914) in the scores for the “No Gamification” group (x̄ = 2,33, σ = 1,25) and the “Gamification” group (x̄ = 2,00, σ = 0,71). This is the first instance of an objective that wasn't fulfilled by all the subjects. In both groups, one of the pairs didn't accomplish this iteration. As for the ones who solved it, there's a common view that the exercise wasn't difficult.


DP1.4 - Objective 3) Have the bot say a greeting when it joins a channel

Let H1 : NG > GF; there was no significant difference (ρ = 0,762) in the scores for the “No Gamification” group (x̄ = 2,00, σ = 0,82) and the “Gamification” group (x̄ = 2,25, σ = 0,83). In this step, again, 1 pair from each group failed to accomplish it. As for difficulty, it's again perceived similarly across the two environments as not being particularly hard.

DP1.5 - Objective 4) Have the bot react to an action message with another

Let H1 : NG > GF; there was no significant difference (ρ = 0,886) in the scores for the “No Gamification” group (x̄ = 1,75, σ = 0,43) and the “Gamification” group (x̄ = 1,75, σ = 0,83). An additional pair from the NG group skipped this step, in contrast to the previous question. As far as difficulty goes, the similarity across groups persists and favors the exercise not being hard.

DP1.6 - Objective 5) Have the bot leave the channel on command

Let H1 : NG > GF; there was no significant difference (ρ = 0,667) in the scores for the “No Gamification” group (x̄ = 1,00, σ = 0,00) and the “Gamification” group (x̄ = 1,50, σ = 0,50). Only 1 pair from each group accomplished this iteration of the problem, and both share roughly the same opinion regarding its difficulty.

DP1.7 - Objective 6) Have the bot join the channel on command

Let H1 : NG > GF; there was no significant difference (ρ = 1,00) in the scores for the “No Gamification” group (x̄ = 1,50, σ = 0,50) and the “Gamification” group (x̄ = 1,50, σ = 0,50). This step shares the same conditions as the previous one: similar opinions, and only 1 pair from each side of the experiment.

DP1.8 - Objective 7) Have the bot display a list of users with prefixes

The only students to complete this objective were a pair from the “No Gamification” group, and they didn't find it troubling. A comparison in terms of perceived difficulty across the setups can't be made in this case.

Overall progress through the problem

No Gamification              Gamification                 Statistics
Min  Max  x̄     σ           Min  Max  x̄     σ           H1   W    ρ
3    8    4,75  2,05         2    7    4,67  2,05         <    43   0,852

Table 7.5: Progress statistics of the tasks solved in the validation experiment


Initially, the idea behind the test process was to measure the time subjects took to fulfill the entirety of the proposed problem. As most didn't finish it within the stipulated time limit, the amount of steps solved was considered instead. To assist in the comparison of the overall progress, the amount of successfully fulfilled objectives from each pair was tallied and underwent the same comparison test from the previous sections. The summary of the results is presented in Table 7.5.

Let H1 : NG < GF; there was no significant difference (ρ = 0,852) in the amount of steps solved in the problem for the “No Gamification” group (x̄ = 4,75, σ = 2,05) and the “Gamification” group (x̄ = 4,67, σ = 2,05). The averages from both groups point towards similar results. The equal standard deviation across them is also characteristic of both groups mirroring each other in terms of pairs that fulfilled fewer and more objectives. It's noteworthy that the NG group's minimum and maximum are 1 above the GF group's, but this is most likely derived from the initial struggle suggested by the results of DP1.1.

Development Process : Summary

In general, the development across the two groups produced rather equivalent results, with no clear indicator that Gamification was either beneficial or prejudicial to it. In a short test like this, that's rather expected. What was unexpected was the difference found in the GF group on the initial step of the problem, but it might be completely unrelated to the presence of the Gamification features. It was most likely caused by other factors, such as the hypothesized greater comfort of the NG group with the technology, as the differing aspects of that deployment didn't seem to impact the rest of the development.

7.4.6 About the Framework

This section analyses the “About the Framework” component of the post-experiment questionnaires and references the data found in Table 7.6. The goal of these questions was to perceive whether there were any differences in the students' knowledge intake about the framework across the two environments.

        No Gamification                      Gamification
        1  2  3  4  5   x̄     σ             1  2  3  4  5   x̄     σ
AF1.1   0  0  1  3  4   4,38  0,70           0  0  4  0  2   3,67  0,94
AF1.2   0  0  3  3  2   3,88  0,78           0  1  2  2  1   3,50  0,96
AF1.3   3  0  5  0  0   2,25  0,97           2  1  2  0  1   2,50  1,38
AF1.4   0  0  4  2  2   3,75  0,83           0  0  2  3  1   3,83  0,69
AF1.5   1  1  4  0  2   3,13  1,27           0  0  5  0  1   3,33  0,75
AF1.6   1  0  5  0  2   3,25  1,20           0  0  5  0  1   3,33  0,75
AF1.7   0  0  5  2  1   3,50  0,71           0  0  4  0  2   3,67  0,94
AF1.8   0  0  4  2  2   3,75  0,83           0  0  3  1  2   3,83  0,90

Table 7.6: Statistics of the “About the Framework” part of the post-experiment questionnaire


Unlike previous sections, the comparison here happens in a different way. First, it must be considered that all of these questions had a “correct” answer, making them behave as a sort of true-or-false questionnaire. Each point in the group declares a statement, which may or may not be a valid fact about the framework, and the subject had to answer on the same scale, indicating whether they agreed with the proposed statement. For example, if the question declared that “Water is wet” (and for the sake of any potential disagreements, it's assumed here that, yes, it really is wet), a student answering with 5 (Strongly Agree) would point towards correctness; in contrast, 1 (Strongly Disagree) would point towards incorrectness. But the answers students could provide weren't in the binary scope of true and false: there was a neutral value of 3, which could be interpreted as “I'm not sure” or “I don't want to answer”, and smaller strength values of 2 and 4 for disagreement and agreement respectively. So in this scenario, the actual answer given by a student doesn't only work towards ascertaining whether they understood the contents correctly, but also captures their confidence in the answers given. To evaluate these results, the distance from the intended answer to each question was used. Say one of the points had a “true” statement; the intended result would be 5, and the distance from the subject's submission would be calculated as 5 − S, where S is the value of the provided answer. In this case, answering with “Strongly Agree” would compute a distance of 0, a neutral answer would compute 2, and “Strongly Disagree” would compute 4. In contrast, if the intended result were 1, then the calculation would be symmetrical, S − 1, producing the same range of distance values.
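A minimal sketch of this scoring rule (the class and method names are purely illustrative, not part of the platform):

// Distance from the intended answer: 5 - S for a "true" statement
// (intended answer 5), S - 1 for a "false" one (intended answer 1).
public class AnswerDistance {

    static int distance(int answer, boolean statementIsTrue) {
        return statementIsTrue ? 5 - answer : answer - 1;
    }

    public static void main(String[] args) {
        System.out.println(distance(5, true));  // Strongly Agree on a true fact -> 0
        System.out.println(distance(3, true));  // neutral answer -> 2
        System.out.println(distance(1, true));  // Strongly Disagree on a true fact -> 4
        System.out.println(distance(1, false)); // Strongly Disagree on a false fact -> 0
    }
}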

                 AF1.1  AF1.2  AF1.3  AF1.4  AF1.5  AF1.6  AF1.7  AF1.8   x̄     σ
No Gamification  0,63   1,13   1,25   1,25   2,13   1,75   1,50   1,25    1,36  0,42
Gamification     1,33   1,50   1,50   1,17   2,33   1,67   1,33   1,17    1,50  0,35

Table 7.7: Distances from the correct answers in the “About the Framework” questions

Table 7.7 tallies the average distances to each correct answer across the two groups, along with the overall averages. Of the 8 questions provided, the NG group scored better in 4 of them and the GF group in the other 4, with the only reasonably distinguishable averages happening on AF1.1, most likely a consequence of the difficulties the GF group had at the beginning of the development process, as suggested in the previous section (7.4.5); overall, they compare very similarly. The total averages also support this equivalence, as they aren't far from each other. To sum up this section, the conclusion is that there wasn't a significant difference in knowledge intake across the two groups, which, again, is expected from a short term experiment. Still, the results are ever so slightly inclined to favor the NG group, but, as in the previous sections, this might be explained by the hypothesis that this group had individuals more comfortable with the technologies being used.


7.4.7 About the Gamification Features

Exclusive to the post-experiment questionnaire attributed to the students using the Gamification enabled deployment of DRIVER 2.0 was a section asking for feedback on said features. The results are tallied in Table 7.8, and the next paragraphs take a look at the conclusions coming from them.

        1  2  3  4  5   x̄     σ
AG1.1   0  3  2  1  0   2,67  0,75
AG1.2   0  0  0  4  2   4,33  0,47
AG1.3   0  1  2  3  0   3,33  0,75
AG1.4   0  0  3  2  1   3,67  0,75
AG1.5   0  0  3  1  2   3,83  0,90
AG1.6   0  1  3  1  1   3,33  0,94
AG1.7   2  1  2  1  0   2,33  1,11
AG1.8   0  2  1  3  0   3,17  0,90
AG1.9   0  0  2  3  1   3,83  0,69

Table 7.8: Statistics of the “Gamification Features” part of the post-experiment questionnaire

AG1.1 - I find these features useful

The general tone of the answers was that the students didn't find the Gamification features particularly useful. This is somewhat expected, as they don't really produce anything that can be directly perceived as advantageous in one's work, which is, by definition, something more “serious” than games.

AG1.2 - DRIVER wouldn’t be as appealing if it didn’t have these features

Most of the students seem to agree that DRIVER wouldn't be as appealing without these features. This can be read in two ways. The first is that Gamification succeeded in creating appeal for the students to use the platform, as intended. The other takes a negative view: everything but the Gamification wasn't interesting. Hopefully not the case.

AG1.3 - I had fun interacting with these features

The results here don't seem to point towards the features being particularly fun or enjoyable, but given the short term of the experiment, most students did not focus on exploring them anyway, and one's definition of “fun” is rather relative. Still, in a general sense, the features didn't seem to cause unamusement either.

AG1.4 - These features need improvement

There's a slight tendency among the subjects to point towards the features needing improvement, but nothing major indicating the features were badly executed. Most likely the feeling of “improvement” moves towards expanding what the platform is capable of in regard to Gamification.

AG1.5 - DRIVER needs more of these features

As in the previous question, some students pointed towards the platform needing more Gamification features. This is in line with the suggestion that the desired improvements referred to expanding the tool's capabilities in this department.

AG1.6 - These features were beneficial in completing the proposed problem

The general feeling is that these features didn’t impact the problem’s resolution.

AG1.7 - These features made me want to create and share paths with others

Somewhat disappointingly, the results here pointed towards Gamification not creating motivation to use the Learning Path features. It's an understandable conclusion, as the short term of the experiment didn't leave a lot of time for the students to focus on exploring the consequences of using these features. Gamification didn't help, but there probably wasn't much desire to make use of paths to begin with, as getting acquainted with the feature would consume the time available.

AG1.8 - I think these features motivate me to learn more about frameworks

There’s a slight division of results in this answer, but overall they point towards the Gamification features not having an impact on the motivation to learn about frameworks.

AG1.9 - I think these features motivate me to keep using DRIVER

In contrast to the previous question, here Gamification seems to have caused a positive impact in motivating the subjects to keep using DRIVER, which is a desirable result.

About the Gamification Features : Summary

In general, Gamification in DRIVER makes the platform more desirable to use, and the subjects considered it to be part of the tool's charm. It wouldn't be as appealing without said features, and some students even expressed interest in seeing them further developed. As for the impact on the development task, the subjects felt mostly indifferent to their presence: they didn't really change their motivation towards learning about frameworks, nor did they promote the students' interest in using Learning Paths.


7.5 Threats to Validation

Validation seeks to obtain results that provide scientific evidence towards the discussion of the topics and hypotheses surrounding a project. With this in mind, it's important to consider the threats that may distort or bias the conclusions taken from the collected results. The performed experiment was designed around minimizing these sorts of threats, but unpredictability dictates that there would still be occurrences of them and that they should be analyzed to evaluate the credibility of the results. This section takes a look at the validation threats that were expected to possibly impact the experiment, and reviews whether they have been adequately discarded.

Misunderstanding of given tasks

A possible threat was that the students could have misunderstood the intended outcome of the tasks, due to inexperience with the introduced concepts or a possibly confusing description of the problem's objectives. This was discarded by asserting the presence of the required basic experience in the pre-experiment quiz and by allowing the subjects to ask the supervisors in case they had doubts about what the requested objectives should produce.

Lack of motivation by the students

Because there wasn't any compensation for the students participating in the experiment, it was possible that a lack of motivation could hinder the outcome. Item EF1.2 of the post-experiment quiz (“I enjoyed programming and developing in the experiment”) aimed to discard this threat, and it did for the results that were tallied into the analysis. As for the pair of students that did suffer from this symptom, it was deemed viable to exclude them from the experiment's results, as it was a unique case.

Free access to the internet

While the intention was that students restricted their learning process to DRIVER only, free internet access wasn't barred from them, meaning some of them could possibly use outside information. This was discarded by constant monitoring of the participants' behavior.

Inter-pair competition

In an experiment involving pairs who could potentially have never worked together before, it was expected that there might be cases where the match wasn't graceful. This was ruled out by item EF1.3 of the post-experiment quiz (“I would work with my partner for this experiment again.”).


Unbalanced groups (in terms of ability)

One of the major concerns was that the distribution of students across the two groups would weight them in terms of ability and experience, more in one than the other, making the experiment's results unreliable. Discovering whether this was the case was the main objective of the pre-experiment quiz. Such a threat would be detrimental to the results relating to the development process and framework knowledge intake in the post-experiment quiz. The results from the background questionnaire statistically ruled out this possibility, but it was previously suggested that the “No Gamification” group was potentially more comfortable with the required technologies, so this threat wasn't completely ruled out. However, as the subsequent results didn't point to any major differences in that department across the two groups, and the perceived scores for the dependent categories of the post-experiment quiz are relatively proportional to the hypothesised comfort of the NG group, this threat wasn't deemed to have caused a significant impact on the experiment's outcome.

7.6 Summary

This chapter documents an empirical study that performed a validation experiment seeking to discover whether Gamification was beneficial to DRIVER 2.0. The test was performed by two groups of students, one using a version of the platform with the Gamification mechanics enabled and the other with them disabled. It followed a protocol in order to minimize disturbances to the collected results. Given the available time and resources, this was the experiment that could have been performed, but it wasn't deemed ideal, as the Gamification features implemented favored long term use of the tool (over weeks), while what was performed was a short term activity (under 2 hours). The scope of the experiment was somewhat incomplete as well, as the redesign component called for a comparison between features of the original and newer versions of the platform, albeit asserting the results of the implementation of Gamification was a higher order objective. As far as the results go, no significant indications supporting the addition of Gamification to DRIVER as beneficial came from this experiment. In fact, the test's outcomes from the two different deployments of the tool point towards relatively equivalent results, which was rather expected given its short term nature. There are a few traces among the statistics supporting the hypothesis that Gamification made users more satisfied and more drawn to using the platform, but nothing indicating an improvement in the developers' performance in learning and using frameworks, or in making use of the platform's features.


Chapter 8

Conclusions and Future Work

This chapter provides a summary of the conclusions and results derived from this dissertation project and takes a look at possible future work to improve the products of its development.

8.1 Conclusions

This dissertation work produced a new version of the DRIVER platform that integrates Gamification mechanics and attempts to solve the previous iteration's usability issues, hoping to make it more appealing to its users and to reinforce developers' performance in their efforts to master the use of frameworks. The primary focus was to assert the effects of adding Gamification elements to this learning environment, and it was believed they would be beneficial. In parallel, the redesign of the platform's base features aimed to clear out potential issues hindering the integration of the new game-based mechanics and to make user interaction simpler. Gamification has a lot of potential, but whether it translated into the new DRIVER remains a bit of a mystery. The performed testing didn't reveal any significant differences when comparing subjects using two distinct versions of the new tool, one with the Gamification features enabled and one without them. The addition of these new concepts didn't seem to cause an impact on the development of the objectives set for the created validation experiment. The expectation was that there would be increased satisfaction in using the platform by the developers, potentially driving them to learn more and better, but there are only a few signs suggesting Gamification made the task more appealing. However, these results were expected for the testing that happened, as time dictated that the validation performed during the dissertation period could only be a short term experiment using students, while also conditioning out the need to evaluate the redesign process of the platform. While not an object of this dissertation's goals, this raised conclusions on how appropriate testing of the conceived hypotheses is a very important factor.


Regardless, it's still believed Gamification can bring the desired benefits to DRIVER in the long term. As previously suggested, there were some signs pointing towards it, and some of the subjects who participated in the validation experiment expressed interest in seeing the concept developed further and actually considered its presence enjoyable, while there were no particular signs of Gamification causing undesirable effects. As for comparing the new tool against the previous one in terms of user experience, unfortunately, there wasn't any collection of data to support that the new platform solves the suggested usability issues, but overall, the new features take less effort to interact with and their effect is more noticeable than before.

8.2 Future Work

Here follows a list of steps considered important to take in the future to improve the results of this dissertation project:

• Extended and more adequate testing: As previously suggested, the testing performed as part of this dissertation's development wasn't deemed sufficient to support the hypothesized impact of creating the new platform. Thus, more adequate testing towards validating the consequences of integrating Gamification into DRIVER, and asserting whether the feature redesign was beneficial, is desirable and needed.

• Expand the Gamification features: Though not before better determining whether Gamification is a good addition or an unnecessary distraction, expanding the capabilities of the game-based features included in DRIVER 2.0 is something to look forward to, as some of the test subjects suggested.

• Improve the recommendation and search system: Recommendation is currently performed on a simple “from-to” basis, and searching is quite rudimentary because it relies on naive full text search. More advanced mechanisms for executing these operations are desirable additions in the future.

• Improve the behavior of Learning Path versioning: Right now, when a documentation page is edited in DRIVER 2.0, the result is somewhat treated as a whole new page. Existing paths won't be updated to reflect these changes and can potentially grow stale in terms of knowledge over time. Therefore, improving the path and page relationship in terms of versioning is one of the concerns going into posterior development.

• Overall refinement of the developed platform: The new version of the tool doesn't come without its own issues, like some interface components being too large, or the lack of some desirable backstage features. Therefore, one of the next steps would be refining the software product resulting from this dissertation.

References

[Bli] Blizzard Entertainment. World of Warcraft. Available at https:// worldofwarcraft.com/, last accessed 2019-07-04.

[Cho14] Yu-kai Chou. Gamification to improve our world. Presented at TEDxLausanne, 2014. Available at https://youtu.be/v5Qjuegtiyc, last acessed 2019-06-22, 2014.

[Dar] Darren Whitlen and other KiwiIRC contributors. KiwiIRC - The webIRC client. Available at https://kiwiirc.com/, last accessed 2020-01-19.

[FA] Nuno Flores and Ademar Aguiar. DRIVER. Available at https://bit.ly/ driverTool, last accessed 2020-01-23.

[FA16] Nuno Flores and Ademar Aguiar. DRIVER - A platform for collaborative framework understanding. In Proceedings - 2015 30th IEEE/ACM International Conference on Au- tomated Software Engineering, ASE 2015, 2016.

[Faca] Facebook Inc. Create React App. Available at https://create-react-app.dev/, last accessed 2020-01-19.

[Facb] Facebook Inc. React - A Javascript library for building user interfaces. Available at https://reactjs.org/, last accessed 2019-06-22.

[Flo12] Nuno Flores. Patterns and Tools for Improving Framework Understanding: a Collabora- tive Approach. PhD thesis, Faculty of Engineering, University of Porto, 2012.

[GD] Andreas Gohr and DokuWiki Community. DokuWiki. Available at https://www. dokuwiki.org/, last accessed 2019-06-22.

[Git] GitHub Inc. GitHub Flavored Markdown Spec. Available at https://github. github.com/gfm/, last accessed 2020-01-19.

[Gooa] Google LLC. Chrome Web Browser. Available at https://www.google.com/ intl/en_us/chrome/, last accessed 2020-01-18.

[Goob] Google LLC. Google Search. Available at https://google.com/, last accessed 2019-07-03.

[Gru08] Tom Gruber. Collective knowledge systems: Where the Social Web meets the Semantic Web. Web Semantics, 2008.

[Han10] Eric Hand. FOLDIT. Nature, 2010.

[HW99] M. Hollander and D. A. Wolfe. Nonparametric statistical methods. Wiley-Interscience, 1999.

79 REFERENCES

[Ins] InspIRCd Development Team. InspIRCd - The Stable, High-Performance and Modular IRCd. Available at http://www.inspircd.org/, last accessed 2020-01-19.

[Kap12] Karl M. Kapp. The Gamification of Learning and Instruction: Game-based Methods and Strategies for Training and Education. Pfeiffer, 2012.

[Lik32] R. Likert. A technique for the measurement of attitudes. Archives of Psychology, 1932.

[MI] Mozilla and Individual Contributors. MDN web docs. Available at https:// developer.mozilla.org/en-US/, last accessed 2019-06-24.

[Moz] Mozilla Corporation. Firefox Web Browser. Available at https://www.mozilla. org/en-US/firefox/new/, last accessed 2020-01-18.

[Mut] Paul Mutton. PircBot Java IRC API. Available at http://www.jibble.org/ pircbot.php, last accessed 2020-01-18.

[Nod] Node.js Foundation, Strongloop, IBM and other expressjs.com contributors. Express - Node.js web application framework. Available at https://expressjs.com/, last accessed 2020-01-19.

[Ope] OpenJS Foundation. Node.js. Available at https://nodejs.org/en/about/, last accessed 2020-01-19.

[Sta] Stack Exchange Inc. Stack Overflow. Available at https://stackoverflow.com/, last accessed 2019-07-03.

[Thea] The Freedesktop.org project. D-Bus. Available at https://www.freedesktop. org/wiki/Software/dbus/, last accessed 2020-01-10.

[Theb] The PostgreSQL Global Development Group. PostgreSQL: The World’s Most Advanced Open Source Relational Database. Available at https://www.postgresql.org/ about/, last accessed 2020-01-19.

[vH] Dimitri van Heesch. Doxygen. Available at http://www.doxygen.nl/index. html, last accessed 2019-06-22.

[Web] Webpack Contributors. Webpack Module Bundler. Available at https://webpack.js.org/, last accessed 2020-01-19.

[WeW] WeWantToKnow AS. Dragonbox Math Apps. Available at https://dragonbox.com/, last accessed 2019-06-24.

Appendix A

Pre-experiment Questionnaire

Here follows a copy of the Pre-experiment Questionnaire that all test subjects were given during the validation experiment.

Empirical Studies in Software Engineering TENFOGS09

December 2019

Pre-experiment Questionnaire

Before starting the experiment, we ask you to take a minute and answer this brief questionnaire to ascertain your profile and background, so that the final results can be effectively interpreted and analysed. Thank you.

The questionnaire is divided into sections with questions. Each question has an identifier (for easy processing later on) and may have either a single answer or a list of possible answers. Each answer should be rated as follows: 1 (Strongly Disagree), 2 (Somewhat Disagree), 3 (Neither Agree nor Disagree), 4 (Somewhat Agree), 5 (Strongly Agree). You should mark with an ‘X’ the rating that best matches your opinion for every answer.

Group ID: ______

Questionnaire

Background

I have considerable experience...

1 2 3 4 5

BG1.1​ ...using software frameworks.

BG1.2​ ...using the Java programming language.

BG1.3​ ...using 3rd party libraries within my Java applications.

BG1.4​ ...using Integrated Development Environments (IDE). (Examples: Netbeans or Eclipse).

BG1.5​ ...using Internet Relay Chat services (IRC).

BG1.6​ ...developing IRC bots.

BG1.7​ ...developing chat bots (for any kind of chat service).

1 2 3 4 5

BG2.1 ​I’ve never used or had contact with the PircBot framework.

Thank you for your time.

Appendix B

Tasks to be solved using the framework

Here follows a printable copy of the tasks that all test subjects had to perform using the framework during the validation experiment. An illustrative solution sketch is appended after the task list.

DRIVER 2 Challenge: Problem

Using the PircBot framework and the details provided on DRIVER to connect to the IRC server, create a Java program that implements an IRC bot that performs the following tasks:

#channel refers to the test channel assigned for testing your bot in the problem.
Format: refers to the expected output format of the message the bot sends.

1) When the bot starts, it connects to the IRC server and joins your #channel.

2) Whenever someone says 'time' in the #channel, the bot replies with a message saying the current time.

Format: (sender): The time is now (time)

Example: Tony: The time is now Sat 1 Dec 20:00:00

2.1) Update the answer to the 'time' command to instead display the current time without saying the sender's name, but now with the whole text in bold and the time part in blue (the actual color may vary).

Format: The time is now (time)

Example: The time is now Sat 1 Dec 20:00:00

3) Whenever the bot joins the #channel, it should send a message saying 'Hello, i'm a bot!'. Make sure the message is only printed when the bot joins the #channel, and not when other users join.

Format: Hello, i'm a bot!

4) Whenever someone sends an action message in the #channel saying 'throws a bucket of water', the bot should reply with an action message saying 'dodges!'. Note that action messages are not the same as regular messages.

Tip: You can send an action message to the channel by typing for example: '/me throws a bucket of water'

Format: dodges! ​

5) Whenever someone types 'please leave' in the #channel, the bot should part from the #channel. The bot should include a reason when leaving the #channel saying 'I was told to!'.

Format: I was told to!

6) Whenever someone sends a private message to the bot saying 'please come back', the bot should (re)join your #channel. The bot should also reply in your direct message saying 'Alright!'.

Tip: To send a direct message type '/msg botname text' for example, '/msg tonybot please come back'

Format: Alright!

7) Whenever someone types 'prefixes' in the #channel, the bot should reply with a single message listing all the users in the #channel that have a prefix in their name. The message should begin with 'List of users with prefixes: ', users should be separated by spaces, and the message should include their prefixes along with their usernames. If there are no users with prefixes, the bot can just reply with 'List of users with prefixes: ' followed by nothing, for simplicity.

Tip: A prefix is the symbol before a user's name; it indicates their role in the channel.
Tip: The first user to join a channel gets the '@' prefix. If you've lost this prefix, simply disconnect your bot, then leave and rejoin your #channel in the IRC client to get it back.

Format: List of users with prefixes: ?user1 ?user2 ...

Examples: If the users in the #channel are: {@Tony, Jarvis, @Supervisor} then we get: List of users with prefixes: @Tony @Supervisor

When no users match we simply get: List of users with prefixes:
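For reference, here is an illustrative sketch (not part of the original handout) of how the tasks above could be approached with PircBot. The server hostname and channel name are placeholders for the details that were provided on DRIVER during the experiment; the callbacks and methods used (onMessage, onJoin, onAction, onPrivateMessage, sendMessage, sendAction, partChannel, getUsers) belong to PircBot's public API.

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.StringJoiner;

import org.jibble.pircbot.Colors;
import org.jibble.pircbot.PircBot;
import org.jibble.pircbot.User;

public class ChallengeBot extends PircBot {

    private static final String CHANNEL = "#channel"; // placeholder channel

    public ChallengeBot() {
        setName("tonybot"); // placeholder nickname
    }

    // Task 3: greet the channel, but only when the bot itself is the joiner.
    @Override
    protected void onJoin(String channel, String sender, String login, String hostname) {
        if (sender.equals(getNick())) {
            sendMessage(channel, "Hello, i'm a bot!");
        }
    }

    // Tasks 2.1, 5 and 7 are all triggered by regular channel messages.
    @Override
    protected void onMessage(String channel, String sender, String login,
                             String hostname, String message) {
        if (message.equals("time")) {
            String time = new SimpleDateFormat("EEE d MMM HH:mm:ss").format(new Date());
            // Task 2 would prepend sender + ": "; this is the task 2.1 variant,
            // with the whole text in bold and the time part in blue.
            sendMessage(channel, Colors.BOLD + "The time is now " + Colors.BLUE + time);
        } else if (message.equals("please leave")) {
            partChannel(channel, "I was told to!"); // task 5: part with a reason
        } else if (message.equals("prefixes")) {
            // Task 7: list every user whose nick carries a prefix such as '@'.
            StringJoiner users = new StringJoiner(" ");
            for (User user : getUsers(channel)) {
                if (!user.getPrefix().isEmpty()) {
                    users.add(user.getPrefix() + user.getNick());
                }
            }
            sendMessage(channel, "List of users with prefixes: " + users);
        }
    }

    // Task 4: action messages ('/me ...') arrive through a dedicated callback.
    @Override
    protected void onAction(String sender, String login, String hostname,
                            String target, String action) {
        if (action.equals("throws a bucket of water")) {
            sendAction(target, "dodges!");
        }
    }

    // Task 6: private messages also have their own callback.
    @Override
    protected void onPrivateMessage(String sender, String login,
                                    String hostname, String message) {
        if (message.equals("please come back")) {
            joinChannel(CHANNEL);
            sendMessage(sender, "Alright!");
        }
    }

    // Task 1: connect and join. connect() throws PircBot's dedicated
    // IrcException (among others) on IRC-level errors.
    public static void main(String[] args) throws Exception {
        ChallengeBot bot = new ChallengeBot();
        bot.connect("irc.example.org"); // placeholder server
        bot.joinChannel(CHANNEL);
    }
}

Being event driven, the subclass never polls the server: PircBot dispatches each incoming IRC line to the matching on* callback, and the same sendMessage(target, ...) call is used whether the target is a channel or a user's nick.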

Appendix C

Post-experiment Questionnaire

Here follows a copy of the Post-experiment Questionnaire given to the test subjects assigned to the group using DRIVER 2.0 without Gamification features during the validation experiment.

Empirical Studies in Software Engineering TENFOGS09

December 2019

Post-experiment Questionnaire

Thank you for participating in this experiment. We now ask you to take a deep breath, relax and try to answer this brief questionnaire that won’t take you more than 5 minutes.

Each question relates to issues regarding your perception of the experiment. The questionnaire is divided into sections with questions. Each question has an identifier (for easy processing later on) and may have either a single answer or a list of possible answers. Each answer should be rated as follows: 1 (Strongly Disagree), 2 (Somewhat Disagree), 3 (Neither Agree nor Disagree), 4 (Somewhat Agree), 5 (Strongly Agree). You should mark with an ‘X’ the rating that best matches your opinion for every answer.

Group ID: ______    Time spent solving the problem: ______ seconds

Questionnaire

External Factors 1 2 3 4 5

EF1.1 I found the whole experiment environment intimidating.

EF1.2​ I enjoyed programming and developing in the experiment.

EF1.3​ I would work with my partner for this experiment again.

EF1.4​ I kept getting distracted by other colleagues outside my group.

Overall Satisfaction 1 2 3 4 5

OS1.1​ Overall, the setup of this experiment was suitable for solving every task presented.

OS1.2 I found the documentation available to be sufficient.

OS1.3​ I felt the need to have access to more information on how to use the framework from sources outside of DRIVER.

OS1.4 Despite my experience, the tools available considerably delayed my work in using the PircBot framework to solve the proposed problem.

OS1.5 ​I had fun and enjoyed using DRIVER.

OS1.6​ I would recommend DRIVER to a colleague or acquaintance.

OS1.7​ I would use DRIVER again.

Development Process

It was hard to find out how to use the framework to complete objective number N of the problem…

1 2 3 4 5

DP1.1 Objective 1) Connect to the IRC server and join the channel.

DP1.2​ Objective 2) Reply with the current time on command.

DP1.3​ Objective 2.1) Update the style of the reply from Objective 2).

DP1.4​ Objective 3) Have the bot say a greeting when it joins a channel.

DP1.5 ​Objective 4) Have the bot react to an action message with another.

DP1.6​ Objective 5) Have the bot leave the channel on command.

DP1.7​ Objective 6) Have the bot join the channel on command.

DP1.8 ​Objective 7) Have the bot display a list of users with prefixes.

About the PircBot Framework 1 2 3 4 5

AF1.1​ PircBot is a Java framework for communicating and interacting with Internet Relay Chat (IRC) servers.

AF1.2 To use the PircBot framework one must extend its current features.

AF1.3​ Bots created with PircBot can’t join multiple channels at a time.

AF1.4​ PircBot is an event driven framework.

AF1.5 ​PircBot uses a different method to send messages to channels and directly to users.

AF1.6​ PircBot can print a username and its prefix by calling a single method.

AF1.7​ PircBot has dedicated exceptions to throw IRC related errors.

AF1.8​ PircBot can be used to create an interactive IRC client.

If you wish to leave any further comments, please use the following space:

______

______

______

______

Thank you for your time.

Appendix D

Post-experiment Questionnaire (w/ Gamification)

Here follows a copy of the Post-experiment Questionnaire given to the test subjects assigned to the group using DRIVER 2.0 with Gamification features during the validation experiment.

Empirical Studies in Software Engineering TENFOGS09

December 2019

Post-experiment Questionnaire

Thank you for participating in this experiment. We now ask you to take a deep breath, relax and try to answer this brief questionnaire that won’t take you more than 5 minutes.

Each question relates to issues regarding your perception of the experiment. The questionnaire is divided into sections with questions. Each question has an identifier (for easy processing later on) and may have either a single answer or a list of possible answers. Each answer should be rated as follows: 1 (Strongly Disagree), 2 (Somewhat Disagree), 3 (Neither Agree nor Disagree), 4 (Somewhat Agree), 5 (Strongly Agree). You should mark with an ‘X’ the rating that best matches your opinion for every answer.

Group ID: ______    Time spent solving the problem: ______ seconds

Questionnaire

External Factors 1 2 3 4 5

EF1.1 I found the whole experiment environment intimidating.

EF1.2​ I enjoyed programming and developing in the experiment.

EF1.3​ I would work with my partner for this experiment again.

EF1.4​ I kept getting distracted by other colleagues outside my group.

Overall Satisfaction 1 2 3 4 5

OS1.1​ Overall, the setup of this experiment was suitable for solving every task presented.

OS1.2 I found the documentation available to be sufficient.

OS1.3​ I felt the need to have access to more information on how to use the framework from sources outside of DRIVER.

OS1.4 Despite my experience, the tools available considerably delayed my work in using the PircBot framework to solve the proposed problem.

OS1.5 ​I had fun and enjoyed using DRIVER.

OS1.6​ I would recommend DRIVER to a colleague or acquaintance.

OS1.7​ I would use DRIVER again.

Development Process

It was hard to find out how to use the framework to complete objective number N of the problem…

1 2 3 4 5

DP1.1 Objective 1) Connect to the IRC server and join the channel.

DP1.2​ Objective 2) Reply with the current time on command.

DP1.3​ Objective 2.1) Update the style of the reply from Objective 2).

DP1.4​ Objective 3) Have the bot say a greeting when it joins a channel.

DP1.5 ​Objective 4) Have the bot react to an action message with another.

DP1.6​ Objective 5) Have the bot leave the channel on command.

DP1.7​ Objective 6) Have the bot join the channel on command.

DP1.8 ​Objective 7) Have the bot display a list of users with prefixes.

About the PircBot Framework

1 2 3 4 5

AF1.1​ PircBot is a Java framework for communicating and interacting with Internet Relay Chat (IRC) servers.

AF1.2 To use the PircBot framework one must extend its current features.

AF1.3​ Bots created with PircBot can’t join multiple channels at a time.

AF1.4​ PircBot is an event driven framework.

AF1.5 ​PircBot uses a different method to send messages to channels and directly to users.

AF1.6​ PircBot can print a username and its prefix by calling a single method.

AF1.7​ PircBot has dedicated exceptions to throw IRC related errors.

AF1.8​ PircBot can be used to create an interactive IRC client.

About DRIVER’s Gamification Features

DRIVER includes a series of features as part of its gamification effort. These include a customizable mascot/avatar, an experience point system and activities you can do to earn experience and unlock customization options for your mascot/avatar. Regarding these features...

1 2 3 4 5

AG1.1​ I find these features useful.

AG1.2​ DRIVER wouldn’t be as appealing if it didn’t have these features.

AG1.3​ I had fun interacting with these features.

AG1.4​ These features need improvement.

AG1.5 ​DRIVER needs more of these features.

AG1.6​ These features were beneficial in completing the proposed problem.

AG1.7 ​These features made me want to create and share paths with others.

AG1.8​ I think these features motivate me to learn more about frameworks.

AG1.9​ I think these features motivate me to keep using DRIVER.

If you wish to leave any further comments, please use the following space:

______

______

______

______

______

______

Thank you for your time.

Appendix E

Table of acquired Questionnaire results

Here follows a copy of the tallied results of the questionnaires performed during the validation experiment, including the subjects that were discarded from the analysis.

The table below was reconstructed from the flattened spreadsheet export. Only question identifiers are shown; the full question texts are given in Appendices A, C and D. Rows marked "(d)" are phrased so that disagreement is the better outcome; for the AF quiz items the correct answer is given in parentheses.

Group                 Bug     Ghost   Flying  Ice     Rock    Ground  Water   Fire
Subject               A   B   A   B   A   B   A   B   A   B   A   B   A   B   A   B
Gamification enabled  No      No      No      No      Yes     Yes     Yes     Yes

Pre-experiment Questionnaire: Background ("I have considerable experience...")
BG1.1                 3   1   3   4   2   1   3   3   2   3   1   1   3   4   3   2
BG1.2                 4   4   4   3   3   3   4   4   3   3   3   2   4   4   4   3
BG1.3                 4   1   4   3   3   2   3   3   3   3   1   1   3   4   3   2
BG1.4                 5   5   5   4   4   3   4   4   4   4   4   4   3   4   4   3
BG1.5                 3   1   3   1   1   1   1   1   1   1   1   1   1   3   2   1
BG1.6                 1   1   2   1   1   1   1   1   1   1   1   1   1   1   1   1
BG1.7                 1   1   2   1   1   1   1   1   1   1   1   1   1   1   1   1
BG2.1                 5   5   5   5   5   5   5   5   5   5   5   5   5   5   5   5

Post-experiment Questionnaire
Time spent (s) [1]    TO      TO      TO      2389    TO      TO      TO      TO

External Factors
EF1.1 (d)             2   3   4   4   3   4   2   2   1   2   2   4   2   1   2   2
EF1.2                 4   4   4   4   5   4   4   4   4   4   5   4   4   4   2   2
EF1.3                 4   5   3   5   5   4   4   4   5   5   5   5   4   4   3   3
EF1.4 (d)             1   1   3   3   1   4   1   1   1   1   2   4   1   1   3   3

Overall Satisfaction
OS1.1                 4   2   4   3   4   4   5   5   4   4   3   4   3   4   3   3
OS1.2                 3   2   2   2   5   4   5   5   4   4   2   2   2   5   3   3
OS1.3 (d)             3   4   4   5   4   4   2   1   1   2   2   1   4   1   3   3
OS1.4 (d)             2   1   2   3   2   2   2   2   2   3   1   1   3   1   3   3
OS1.5                 4   3   4   5   5   5   3   4   4   4   5   4   4   4   2   2
OS1.6                 4   2   4   4   4   4   3   4   3   4   5   4   3   3   3   3
OS1.7                 5   2   5   4   5   4   3   4   4   5   5   5   3   3   3   2

Development Process (all rows (d); "-" = objective not reached)
DP1.1                 2   2   2   2   2   2   1   1   4   3   2   3   3   1   5   5
DP1.2                 2   1   2   2   2   2   1   1   2   3   1   1   2   2   -   -
DP1.3 [2]             4, 4, 2, 2, 1, 1, 2, 3, 2, 1
DP1.4 [2]             3, 3, 2, 2, 1, 1, 3, 3, 2, 1
DP1.5                 -   -   -   -   2   2   2   1   -   -   3   1   2   1   -   -
DP1.6                 -   -   -   -   -   -   1   1   -   -   -   -   2   1   -   -
DP1.7                 -   -   -   -   -   -   2   1   -   -   -   -   2   1   -   -
DP1.8                 -   -   -   -   -   -   2   1   -   -   -   -   -   -   -   -
Solved total          3   3   3   3   5   5   8   8   2   2   5   5   7   7   1   1

About the PircBot Framework [3]
AF1.1 (agree)         4, 4, 4, 5, 5, 5, 3, 5, 3, 3, 5, 5, 3, 3
AF1.2 (agree)         4, 4, 3, 4, 5, 5, 3, 3, 3, 3, 4, 5, 4, 2
AF1.3 (disagree)      3, 3, 1, 1, 3, 3, 3, 1, 3, 5, 1, 1, 2, 3
AF1.4 (agree)         5, 3, 4, 5, 3, 3, 3, 4, 3, 4, 5, 4, 3, 4
AF1.5 (disagree)      5, 3, 5, 3, 3, 3, 2, 1, 3, 3, 5, 3, 3, 3
AF1.6 (agree)         5, 3, 3, 3, 5, 3, 3, 1, 3, 3, 3, 5, 3, 3
AF1.7 (agree)         5, 3, 4, 3, 3, 3, 3, 4, 3, 3, 5, 5, 3, 3
AF1.8 (agree)         5, 3, 5, 3, 4, 3, 3, 4, 3, 3, 5, 5, 3, 4

About DRIVER's Gamification Features (gamification-enabled groups only; not applicable to the others)
Group                 Rock    Ground  Water   Fire
Subject               A   B   A   B   A   B   A   B
AG1.1                 3   3   2   4   2   2   3   3
AG1.2                 4   4   5   5   4   4   3   3
AG1.3                 4   3   4   4   3   2   3   2
AG1.4 (d)             3   4   4   3   5   3   3   2
AG1.5                 4   3   3   5   5   3   3   3
AG1.6                 2   3   3   5   3   4   2   3
AG1.7                 4   3   2   3   1   1   2   3
AG1.8                 4   4   4   3   2   2   2   1
AG1.9                 4   4   4   5   3   3   2   3

[1] TO = timeout (full time used). Groups that indicated a time but did not solve all problems are counted as a TO.
[2] The export lost the positions of the blank (unanswered) cells in these rows, so the 10 recorded answers are listed in source order without column attribution.
[3] Each AF row contains 14 answers for 16 subjects; the two blank cells could not be attributed after extraction, so answers are listed in source order. In the original spreadsheet, cell colors marked correct (green) and incorrect (red) answers; the colors do not survive text extraction.