Impact of Technological Support on the Workload of Software Prototyping

Von der Fakultät für Mathematik, Informatik und Naturwissenschaften der RWTH Aachen University zur Erlangung des akademischen Grades einer Doktorin der Naturwissenschaften genehmigte Dissertation

vorgelegt von

Sarah Suleri, M.Sc. aus Toba Tek Singh, Pakistan

Berichter: Prof. Dr. Matthias Jarke, Prof. Dr. Wolfgang Prinz, Prof. Dr. Ulrich J. Schröder

Tag der mündlichen Prüfung: 18.02.2021

Diese Dissertation ist auf den Internetseiten der Universitätsbibliothek verfügbar.

Sarah Suleri: Impact of Technological Support on the Workload of Software Prototyping, Doctoral Dissertation, © December 2020

Eidesstattliche Erklärung

Declaration of Authorship

I, Sarah Suleri

declare that this thesis and the work presented in it are my own and have been generated by me as the result of my own original research.

Hiermit erkläre ich an Eides statt / I do solemnly swear that:

1. This work was done wholly or mainly while in candidature for the doctoral degree at this faculty and university;

2. Where any part of this thesis has previously been submitted for a degree or any other qualification at this university or any other institution, this has been clearly stated;

3. Where I have consulted the published work of others or myself, this is always clearly attributed;

4. Where I have quoted from the work of others or myself, the source is always given. This thesis is entirely my own work, with the exception of such quotations;

5. I have acknowledged all major sources of assistance;

6. Where the thesis is based on work done by myself jointly with others, I have made clear exactly what was done by others and what I have contributed myself;

7. Parts of this work have been published before as: Suleri, S., Sermuga Pandian, V. P., Shishkovets, S., & Jarke, M. (2019, May). Eve: A sketch-based software prototyping workbench. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems.

Suleri, S., Kipi, N., Tran, L. C., & Jarke, M. (2019, October). UI Design Pattern-driven Rapid Prototyping for Agile Development of Mobile Applications. In Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services.

Suleri, S., Hajimiri, Y., & Jarke, M. (2020, October). Impact of using UI Design Patterns on the Workload of Rapid Prototyping of Smartphone Applications: An Experimental Study. In Proceedings of the 22nd International Conference on Human-Computer Interaction with Mobile Devices and Services. Sermuga Pandian, V. P., Suleri, S., & Jarke, M. (2020, May). Syn: Synthetic Dataset for Training UI Element Detector From Lo-Fi Sketches. In Proceedings of the 2020 IUI Conference on Intelligent User Interfaces. Sermuga Pandian, V. P., Suleri, S., Beecks C., & Jarke, M. (2020, Dec). MetaMorph: AI Assistance to Transform Lo-Fi Sketches to Higher Fidelities. In Proceedings of the 2020 OzCHI Australian Conference on Human Computer Interaction. Sermuga Pandian, V. P., Suleri, S., & Jarke, M. (2021, May). UISketch: A Large-Scale Dataset of UI Element Sketches. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems.

______

“Sometimes it is the people no one can imagine anything of, who do the things no one can imagine.” — Alan Turing

ABSTRACT

Prototyping is a broadly utilized iterative technique for brainstorming, communicating, and evaluating user interface (UI) designs. This research aims to analyze this process from three aspects: traditional UI prototyping, rapid prototyping, and prototyping for accessibility. We propose three novel approaches and realize them by introducing three artifacts: 1) Eve, a sketch-based prototyping workbench that supports automation of transforming low fidelity prototypes to higher fidelities, 2) Kiwi, a UI design pattern and guidelines library to support UI design pattern-driven prototyping, 3) Personify, a persona-based UI design guidelines library for accessible UI prototyping. We evaluate the usability of these artifacts, and the results indicate good usability and learnability. Furthermore, we use NASA-TLX to study the impact of using these three novel approaches on the subjective workload experienced by the designers during the software prototyping process. Our workload analysis reveals that, unlike the traditional prototyping approach, Eve’s comprehensive support eliminates the need for switching between various prototyping tools while progressing through the low, medium, and high fidelity prototypes. Consequently, there is a significant decrease in the subjective workload experienced by designers using the comprehensive approach offered by Eve. There is also a significant reduction in mental demand, temporal demand, and effort, and a fivefold increase in overall perceived performance using the comprehensive approach (Eve). Similarly, the subjective workload experienced by designers using the pattern-driven approach with Kiwi is significantly less than the workload experienced using the traditional approach to rapid prototyping. Specifically, there is a significant decrease in the physical demand and effort of rapid prototyping while using the pattern-driven approach. Lastly, the subjective workload experienced by UI/UX designers using the persona-driven approach offered by Personify is significantly less than the workload experienced using the traditional approach to prototyping for accessibility. Specifically, there is a significant decrease in the mental demand and effort of prototyping accessible UIs while using Personify. This work extends prior work on UI prototyping and is broadly applicable to understanding the impact of using deep learning, UI design patterns, and personas on the workload of UI prototyping.


ÜBERBLICK

Das Prototyping ist eine weit verbreitete iterative Technik für das Brainstorming, die Kommunikation und die Bewertung von UI-Designs. Diese Forschung zielt darauf ab, diesen Prozess unter drei Aspekten zu analysieren: traditionelles UI-Prototyping, Rapid Prototyping und Prototyping für Barrierefreiheit. Wir schlagen drei neuartige Ansätze vor und realisieren sie durch die Einführung von drei Artefakten: 1) Eve, eine skizzen-basierte Prototyping-Werkbank, die die Automatisierung der Umwandlung von Prototypen mit geringer Wiedergabetreue in höhere Wiedergabetreue unterstützt, 2) Kiwi, eine Bibliothek mit UI-Design Patterns und Guidelines zur Unterstützung des Pattern-gesteuerten Prototypings von UI-Designs, 3) Personify, eine Persona-basierte Bibliothek mit UI-Design Guidelines für barrierefreies UI-Prototyping. Wir evaluieren die Nutzbarkeit dieser Artefakte, und die Ergebnisse weisen auf eine gute Nutzbarkeit und Lernfähigkeit hin. Darüber hinaus verwenden wir NASA-TLX, um die Auswirkungen der Verwendung dieser drei neuartigen Ansätze auf die subjektive Arbeitsbelastung der Designer während des Software-Prototyping-Prozesses zu untersuchen. Unsere Analyse der Arbeitsbelastung zeigt, dass Eves umfassende Unterstützung im Gegensatz zum traditionellen Prototyping-Ansatz den Wechsel zwischen verschiedenen Prototyping-Tools überflüssig macht, während die Prototypen mit niedriger, mittlerer und hoher Wiedergabetreue durchlaufen werden. Folglich ist die subjektive Arbeitsbelastung von Designern, die den von Eve angebotenen umfassenden Ansatz nutzen, deutlich geringer. Auch die mentale Belastung, die zeitliche Belastung und der Arbeitsaufwand sind deutlich geringer, und die wahrgenommene Gesamtleistung steigt mit dem umfassenden Ansatz (Eve) um das Fünffache. In ähnlicher Weise ist die subjektive Arbeitsbelastung der Designer, die den Pattern-getriebenen Ansatz mit Kiwi verwenden, deutlich geringer als die Arbeitsbelastung, die mit dem traditionellen Ansatz des Rapid Prototyping verbunden ist. Insbesondere sind die physische Beanspruchung und der Aufwand beim Rapid Prototyping bei Verwendung des Pattern-getriebenen Ansatzes deutlich geringer. Und schließlich ist die subjektive Arbeitsbelastung, die Designer mit dem Persona-getriebenen Ansatz von Personify erfahren, signifikant geringer als die Arbeitsbelastung, die mit dem traditionellen Ansatz des Prototyping für Barrierefreiheit verbunden ist. Genauer gesagt gibt es einen signifikanten Rückgang der mentalen Anforderungen und des Aufwands für das Prototyping barrierefreier UIs bei der Verwendung von Personify. Diese Arbeit zielt darauf ab, frühere Arbeiten zum UI-Prototyping auszuweiten, und ist allgemein anwendbar, um die Auswirkungen der Verwendung von Deep Learning, UI-Design Patterns und Personas auf die Arbeitsbelastung beim UI-Prototyping zu verstehen.


ACKNOWLEDGMENTS

This has been a very long and difficult journey. One that makes or breaks you. Fortunately and unfortunately for me, it did a bit of both. While there have been a lot of people who have tried to make this journey harder, there has also been a lot of love and support from others. So instead of mentioning anyone despite whom I was able to reach wherever I am today, I would like to mention a few extremely special people who have been there for me through thick and thin.

I would begin with my sister, Seemin Suleri. No one knows the hardships of this journey more than you. Thank you for making me believe that I can do anything in this world. Thank you for having my back, for being my guru, and most importantly, thank you for being there. This journey would have been a lot harder if it wasn’t for you.

The voice of reason, Waleed bin Dawood, my strength, my partner in every crime, my best friend. Walee tu na hota to mene mar thori jaana tha, pr kasme life me maza nahi ana tha. Mere jiger k totay, thank you for keeping me alive, for keeping me sane amidst this insanity.

My constant support system, Ji. It’s beyond words to explain what you mean to me. You are the only person in the world who can make a 10 hour time difference seem so seamless. Thank you for being my 3 am friend from halfway across the world, for listening to my hysterical rants, and for being there for me when I was a nervous wreck. Thank you for everything.

Most importantly, my husband, thank you for being my rock, my joy, my courage, my best friend, and much more. Thank you for being you. Naan unnai kaadhalikiraen.

It’s very HARD for me, not to mention Chuchu. How can that be! Chuchu, you made life here much more fun! Much easier! Thank you for being the Chuchu to Poms.

I would also like to mention a few names of my brilliant students who played an extremely important part in my research: Lana, Nilda, Tina, Christian, Harish, Yeganeh. I am very glad to have known and worked with such creative and hardworking people.

Lastly, this dissertation is an acknowledgment for a young, naive girl, a nobody from a small town in the middle of nowhere. It was her undying courage and strength that led her to become someone, somewhere.

Onwards and Upwards!

S.


Contents

1 introduction
  1.1 Thesis Statement
  1.2 Research Questions
  1.3 Scope
  1.4 Research Approach
  1.5 RQ1: How much is the workload of software prototyping?
  1.6 RQ2: How can we support software prototyping technologically?
  1.7 RQ3: How would this technological support to software prototyping impact its workload?
  1.8 Benefits
  1.9 Outline
2 literature review
  2.1 Software Prototyping Tools
  2.2 Workload Analysis
i Traditional Prototyping
3 formative study on traditional prototyping
  3.1 Participants
  3.2 Procedure
  3.3 Analysis
  3.4 Key Findings
  3.5 Summary
  3.6 Next Steps
4 feature list of proposed solution
  4.1 Project Management
  4.2 Fidelities and Modes Management
  4.3 Screen Management
  4.4 User Input
  4.5 Fidelity Transformation
  4.6 Interaction Map
  4.7 Collaboration and Evaluation
  4.8 Summary
5 eve: a comprehensive prototyping workbench
  5.1 Projects Management
  5.2 Screen Management
  5.3 Fidelities and Modes Management
  5.4 Design
  5.5 Interaction
  5.6 Preview
  5.7 Collaboration
  5.8 Summary
6 usability evaluation of eve
  6.1 Study Design
  6.2 Results
  6.3 Summary
7 workload evaluation of eve
  7.1 Rationale for Study
  7.2 Null Hypothesis
  7.3 Participants
  7.4 Study Design
  7.5 Apparatus
  7.6 Measurements
  7.7 Task Categories
  7.8 Procedure
  7.9 Analysis
  7.10 Results & Discussion
  7.11 Summary
8 limitations and future work
ii Rapid Prototyping
9 formative study on rapid prototyping
  9.1 Participants
  9.2 Procedure
  9.3 Analysis
  9.4 Key Findings
  9.5 Summary
10 ui design pattern-driven rapid prototyping
  10.1 Background
  10.2 Proposed Approach
  10.3 Summary
11 kiwi: ui design patterns & guidelines library
  11.1 Collecting Patterns & Guidelines
  11.2 Documenting Patterns & Guidelines
  11.3 Validating Patterns
  11.4 Connecting Patterns & Guidelines
  11.5 Categorizing Patterns & Guidelines
  11.6 Application Types
  11.7 Summary
12 usability evaluation of kiwi
  12.1 Study Design
  12.2 Results
  12.3 Summary
13 workload evaluation of kiwi
  13.1 Rationale for Study
  13.2 Null Hypothesis
  13.3 Participants
  13.4 Study Design
  13.5 Apparatus
  13.6 Measurements
  13.7 Task Categories
  13.8 Procedure
  13.9 Analysis
  13.10 Results & Discussion
  13.11 Summary
14 limitations & future work
iii Prototyping For Accessibility
15 formative study on prototyping for accessibility
  15.1 Participants
  15.2 Procedure
  15.3 Analysis
  15.4 Key Findings
  15.5 Identified Needs
  15.6 Summary
16 persona-based ui design guidelines
  16.1 Personas
  16.2 Accessibilities
  16.3 UI Design Guidelines for Accessible UIs
  16.4 Proposed Approach
  16.5 Summary
17 personify: persona-based ui design guidelines library
  17.1 Collecting UI Design Guidelines for Accessibility
  17.2 Documenting Personas
  17.3 Validating Personas
  17.4 Categorizing Personas into Impairments
  17.5 Connecting Personas & Guidelines
  17.6 Summary
18 usability evaluation of personify
  18.1 Study Design
  18.2 Results
  18.3 Summary
19 workload evaluation of personify
  19.1 Rationale for Study
  19.2 Null Hypothesis
  19.3 Participants
  19.4 Study Design
  19.5 Apparatus
  19.6 Measurements
  19.7 Task Categories
  19.8 Procedure
  19.9 Analysis
  19.10 Results & Discussion
  19.11 Summary
20 limitations & future work
21 conclusion
iv Appendices
a ui prototyping tools review
b workload analysis: task categories
c traditional prototyping workload analysis
  c.1 Lo-Fi Workload Analysis
  c.2 Me-Fi Workload Analysis
  c.3 Hi-Fi Workload Analysis

bibliography

publications

List of Figures

Figure 2.1: SILK sketching and storyboarding for a weather application (Landay and Myers, 2001).
Figure 2.2: Interface widgets that SILK recognizes during sketching (left) and in the transformed interface (right) (Landay and Myers, 2001).
Figure 3.1: Demographics and prior experience of participants of the formative study on traditional prototyping.
Figure 3.2: Project structure created during lo-fi prototyping, provided by S1P32 (Shishkovets, 2019).
Figure 5.1: Projects overview screen of Eve enables users to (A) view user information, (B) view the list of projects, (C) create a new project, (D) sort the projects by last viewed, date created, and project name, and (E) search projects by name.
Figure 5.2: In Eve, creating a new project has three steps: (1) naming the new project and selecting the desired platform: Web, Desktop, Phone, Tablet, or Smartwatch, (2) choosing the OS, and (3) selecting the method of input.
Figure 5.3: In Eve, users can sketch their designs in design mode of lo-fi.
Figure 5.4: In Eve, users can draw, erase, and edit a lo-fi sketch.
Figure 5.5: In Eve, users can view and modify the detected UI elements in lo-fi sketches and the corresponding generated UI widgets in me-fi.
Figure 5.6: In Eve, users can view the current status of the API call in three different states: (a) inactive, (b) active, (c) connection error.
Figure 5.7: In Eve, users can view and further polish their UI designs in design mode of me-fi.
Figure 5.8: In Eve, users can view and edit the automatically generated hi-fi design and code.
Figure 5.9: In Eve, the interaction map provides an overview of all interactions.
Figure 5.10: Preview mode of Eve.
Figure 5.11: In Eve, users can view, reply to, and resolve a comment using the comment indicator.
Figure 6.1: Demographics and prior experience of participants of the usability study of Eve.
Figure 6.2: SUS mean responses for Eve.
Figure 6.3: Participants' preference for frequency of using Eve according to SUS.
Figure 6.4: Participants' perception of complexity of Eve according to SUS.
Figure 6.5: Participants' perception of ease of using Eve according to SUS.
Figure 6.6: Participants' perception of need of any technical support while using Eve according to SUS.
Figure 6.7: Participants' perception of how well integrated Eve is according to SUS.
Figure 6.8: Participants' perception of design inconsistencies in Eve according to SUS.
Figure 6.9: Participants' perception of ease of learning Eve according to SUS.
Figure 6.10: Participants' perception of difficulty of using Eve according to SUS.
Figure 6.11: Participants' perception of their confidence in using Eve according to SUS.
Figure 6.12: Participants' perception of the need of prior knowledge and expertise in using Eve according to SUS.
Figure 7.1: Demographics and prior experience of participants of the workload study on traditional UI prototyping.
Figure 7.2: Distribution of participants based on their prior experience for the workload study on traditional prototyping.
Figure 7.3: Comparison of average workload experienced by participants of the workload study during the entire process of prototyping using the traditional versus the comprehensive approach (Eve).
Figure 7.4: Comparison of physical demand, mental demand, temporal demand, performance, effort, and frustration experienced by participants of the workload analysis during the entire process of prototyping using the traditional versus the comprehensive approach (Eve).
Figure 7.5: Comparison of average workload, physical demand, mental demand, temporal demand, performance, effort, and frustration experienced by participants of the workload study during lo-fi prototyping using the traditional versus the comprehensive approach (Eve).
Figure 7.6: Lo-fi designs created by participants of (a) Control group (traditional approach) (b) Experimental group (Eve).
Figure 7.7: Comparison of average workload, physical demand, mental demand, temporal demand, performance, effort, and frustration experienced by participants of the workload study during me-fi prototyping using the traditional versus the comprehensive approach (Eve).
Figure 7.8: Me-fi designs created by participants of (a) Control group (traditional approach) (b) Experimental group (Eve).
Figure 7.9: Comparison of average workload, physical demand, mental demand, temporal demand, performance, effort, and frustration experienced by participants of the workload study during hi-fi prototyping using the traditional versus the comprehensive approach (Eve).
Figure 7.10: Hi-fi designs created by participants of (a) Control group (traditional approach) (b) Experimental group (Eve).
Figure 7.11: Correlation between subjective workload and UI element detection accuracy, precision, and recall experienced by participants during the workload analysis.
Figure 9.1: Demographics and prior experience of participants of the formative study on rapid prototyping.
Figure 11.1: Kiwi, a web-based UI design patterns and guidelines library.
Figure 11.2: Pattern description with sample GUI and layout blueprint of the Product Catalog pattern.
Figure 11.3: Documenting UI design guidelines in a standard format.
Figure 11.4: Kiwi Structure.
Figure 11.5: HiFi: Pattern Overview.
Figure 11.6: HiFi: Application Type Overview.
Figure 12.1: Demographics and prior experience of participants of the usability study of Kiwi.
Figure 12.2: SUS mean responses for Kiwi.
Figure 12.3: Participants' preference for frequency of using Kiwi according to SUS.
Figure 12.4: Participants' perception of complexity of Kiwi according to SUS.
Figure 12.5: Participants' perception of ease of using Kiwi according to SUS.
Figure 12.6: Participants' perception of need of any technical support while using Kiwi according to SUS.
Figure 12.7: Participants' perception of how well integrated Kiwi is according to SUS.
Figure 12.8: Participants' perception of design inconsistencies in Kiwi according to SUS.
Figure 12.9: Participants' perception of ease of learning Kiwi according to SUS.
Figure 12.10: Participants' perception of difficulty of using Kiwi according to SUS.
Figure 12.11: Participants' perception of their confidence in using Kiwi according to SUS.
Figure 12.12: Participants' perception of the need of prior knowledge and expertise in using Kiwi according to SUS.
Figure 13.1: Demographics and prior experience of participants of the workload study on rapid prototyping.
Figure 13.2: Distribution of participants based on their prior experience for the workload study on rapid prototyping.
Figure 13.3: Comparison of average workload experienced by participants of the workload study during rapid prototyping using the traditional versus the UI design pattern-driven approach.
Figure 13.4: Comparison of physical demand, mental demand, temporal demand, performance, effort, and frustration experienced by participants of the workload analysis during rapid prototyping using the traditional versus the UI design pattern-driven approach.
Figure 15.1: Demographics and prior experience of participants of the formative study on prototyping for accessibility.
Figure 16.1: Types of Accessibilities - Situation Based.
Figure 17.1: Personify, Persona-based UI Design Guidelines Library.
Figure 17.2: Personify summarizes the UI design guidelines.
Figure 17.3: Personify represents all the guidelines in a graphical manner.
Figure 17.4: Personify provides a sample persona, "Ammar - the Music Teacher with Complete Blindness".
Figure 17.5: Personify graphically visualizes all the personas.
Figure 17.6: Personify documents impairments as a card with the name, description, and a representative image.
Figure 17.7: A graph connecting color blindness with the relevant personas.
Figure 17.8: Personify visualizes each persona and the relevant UI design guidelines in a graphical manner.
Figure 18.1: Demographics and prior experience of participants of the usability study of Personify.
Figure 18.2: SUS mean responses for Personify.
Figure 18.3: Participants' preference for frequency of using Personify according to SUS.
Figure 18.4: Participants' perception of complexity of Personify according to SUS.
Figure 18.5: Participants' perception of ease of using Personify according to SUS.
Figure 18.6: Participants' perception of need of any technical support while using Personify according to SUS.
Figure 18.7: Participants' perception of how well integrated Personify is according to SUS.
Figure 18.8: Participants' perception of design inconsistencies in Personify according to SUS.
Figure 18.9: Participants' perception of ease of learning Personify according to SUS.
Figure 18.10: Participants' perception of difficulty of using Personify according to SUS.
Figure 18.11: Participants' perception of their confidence in using Personify according to SUS.
Figure 18.12: Participants' perception of the need of prior knowledge and expertise in using Personify according to SUS.
Figure 19.1: Demographics and prior experience of participants of the workload study on accessible UI prototyping.
Figure 19.2: Distribution of participants based on their prior experience for the workload study on accessible UI prototyping.
Figure 19.3: Comparison of average workload experienced by participants of the workload study during accessible UI prototyping using the traditional versus the persona-driven approach (Personify).
Figure 19.4: Comparison of physical demand, mental demand, temporal demand, performance, effort, and frustration experienced by participants of the workload analysis during accessible UI prototyping using the traditional versus the persona-driven approach (Personify).

List of Tables

Table 2.1: Overview of academic prototyping tools in comparison with Eve.
Table 3.1: Summary of preferred tools and techniques for project structure, UI design, collaboration, and evaluation during lo-fi prototyping.
Table 3.2: Summary of preferred tools and techniques for project structure, UI design, collaboration, and evaluation during me-fi prototyping.
Table 3.3: Summary of preferred tools and techniques for project structure, UI design, collaboration, and evaluation during hi-fi prototyping.
Table 7.1: Participant details and task allocation for workload analysis of Control Group (Traditional) and Experimental Group (Eve).
Table 7.2: Data collected using NASA-TLX for the entire process of prototyping.
Table 7.3: The number of UI elements the participants sketched, the number of elements correctly identified, wrongly identified, and the number of unidentified UI elements during the Workload Study.
Table 13.1: Participant details and assigned application types for rapid prototyping for workload analysis.
Table 13.2: Data collected using NASA-TLX for rapid prototyping.
Table 13.3: Comparison of subjective workload, physical demand, mental demand, temporal demand, performance, effort, and frustration of using the UI design pattern-driven and traditional approach to rapid prototyping.


Table 15.1: Experience in years and accessibility domains of the participants of our follow-up interviews.
Table 19.1: Participant details and assigned application types for accessible UI prototyping for workload analysis.
Table 19.2: Comparison of subjective workload, physical demand, mental demand, temporal demand, performance, effort, and frustration of using the persona-driven (Personify) and traditional approach to accessible UI prototyping.
Table 19.3: Data collected using NASA-TLX for accessible UI prototyping.
Table a.1: Overview of commercial prototyping tools.
Table c.1: Data collected using NASA-TLX during Lo-Fi prototyping.
Table c.2: Data collected using NASA-TLX during Me-Fi prototyping.
Table c.3: Data collected using NASA-TLX during Hi-Fi prototyping.

1 INTRODUCTION

User interface (UI) prototyping1 is an iterative process that enables UI/UX designers to create interactive mockups of their UI designs. These prototypes can be used to brainstorm different solutions, communicate design ideas with peers, and further evaluate them with experts and end-users (Camburn et al., 2017). Throughout the software development process, UI prototypes can serve several purposes: an analysis artifact to explore the problem space with stakeholders, a requirements artifact to depict the initial vision of the system, a design artifact to explore the solution space of the system, and a communication artifact to share and discuss various possible UI design(s) of the system (Ambler, 2004). In many cases, UI prototypes serve as a potential foundation for building the actual end product.

Traditionally, the UI prototyping process comprises three high-level steps (Ambler, 2004). The initial step is to analyze the user needs to obtain an essential feature set and scope for ideating the solution. The next step is building the prototype, which implies converting abstract design ideas into something substantial. The final step is the evaluation of the prototype to validate whether it fulfills the user needs. However, the prototyping process is not straightforward but highly iterative. During the second and third steps, UI prototypes evolve through three fidelities: low-fidelity (lo-fi), medium-fidelity (me-fi), and high-fidelity (hi-fi). The difference in fidelities can be considered in terms of the maturity of the UI design and interactivity. A lo-fi prototype can be a rough freehand sketch or a paper prototype; me-fi, a digital design based on the lo-fi sketches; and hi-fi, a refined interactive prototype, which closely resembles the end product.

During lo-fi prototyping, designers typically use paper & pencil, tablet & stylus, and whiteboard & post-it approaches to quickly reify design ideas (Lancaster, 2004). These lo-fi sketches are more focused on fundamental structural matters rather than design and beautifying details (Coyette and Vanderdonckt, 2005). In terms of productivity, lo-fi sketches are 10 to 20 times easier and faster to create than me-fi and hi-fi prototypes (Duyne et al., 2002). UI/UX designers who conceptualize ideas using pen and paper tend to explore design more broadly. Contrarily, while using computer-based tools for lo-fi prototyping, they explore designs in depth (Virzi et al., 1996).

1 In this dissertation, the terms Software Prototyping and User Interface (UI) Prototyping are used interchangeably. More specifically, we are interested in user interface designs of smartphone applications.


Many designers believe that end-users consider paper prototypes unprofessional. However, lo-fi and hi-fi prototypes are equally good at revealing usability issues, even though users make significantly more remarks about hi-fi prototypes (Walker et al., 2002). Me-fi was introduced as an intermediate fidelity between the ease of lo-fi and the authenticity of hi-fi (Engelberg et al., 2002). Me-fi enhances lo-fi sketches by focusing on improving the system's design details, interaction, navigation, and usability. The main advantage of me-fi is that it gives the impression of a functioning application sufficient for usability evaluation. Still, it has a lower cost and requires less time to create and modify than hi-fi. A hi-fi prototype is a more polished and advanced version of me-fi. It is as close as possible to the end product in terms of features, appearance, and behavior (Walker et al., 2002). However, the back-end might be simulated rather than implemented (Engelberg et al., 2002). Hi-fi is especially useful for conducting evaluations, illustrating design specifications for developers, and product marketing purposes (Rudd et al., 1996). However, making significant changes in the UI design and behavior of a hi-fi prototype is frustrating and expensive (Pernice, 2016).

These fidelities exist as a continuum in the UI prototyping process. They are not mutually exclusive but rather interconnected and interdependent. Each fidelity holds its own purpose in the UI design evolution. Using lo-fi prototypes, designers can explore and evaluate initial design ideas at little cost. Doing so enables them to fix usability problems before spending more effort, time, and money on further implementing the UI design (Lancaster, 2004). Me-fi is built on top of what was discovered during lo-fi prototyping (Engelberg et al., 2002). In case lo-fi is skipped and designers start prototyping from me-fi, it is an expensive and time-consuming process to explore multiple design ideas to acquire user feedback. The difference is due to the quick and dirty nature of lo-fi sketches, making them easier to change and cheaper to discard than me-fi designs (Lancaster, 2004). Similarly, hi-fi is developed based on the design, content, and interactions defined in me-fi (Walker et al., 2002). Since me-fi is evaluated before hi-fi is implemented, the UI design choices are already tested by end-users. In case me-fi is skipped and the designers choose to develop hi-fi from scratch or based on lo-fi, the UI design, content, and interactions defined in such a hi-fi prototype will not be pre-evaluated by end-users or experts. Therefore, in case of a need to change the UI design, the front-end code will need to be altered, requiring much more effort, time, and money than making a design change in lo-fi and me-fi (Engelberg et al., 2002; Lancaster, 2004; Pernice, 2016).

Even though prototyping fidelities are well defined, one prototype can be categorized into more than one fidelity. For example, UI screens created using any graphics editor (e.g., Photoshop, Inkscape) are me-fi in terms of UI design quality. However, since such images do not respond to user interactions, they are lo-fi in terms of interactivity. Contrarily, prototypes developed using programming languages (e.g., HTML, Visual Basic) could be considered hi-fi in terms of interaction. However, if their visual design is not mature, they will still be regarded as lo-fi in UI design (Ha et al., 2014).

In this dissertation, we aim to study the software prototyping process in depth to analyze designers' workflow and pain points. We intend to investigate the prototyping process from the workload perspective. Here, the term workload refers to the subjective perception of the level of physical and cognitive burden experienced by the designer during the entire prototyping process (Gore, 2010). Considering that different people have different skills and capabilities, we shall also investigate various methods for computing the subjective workload of prototyping, generalizable for a broad spectrum of users. We aim to support the UI prototyping process technologically and compare the impact of this technological support on the subjective workload experienced by the designers in the traditional (as-is) and technologically assisted (to-be) scenarios.

1.1 thesis statement

This dissertation explores the subjective workload experienced during the prototyping process and how technologically supporting this process affects this workload.

1.2 research questions

More specifically, we would like to investigate the following research questions:

RQ1 How much is the workload of software prototyping?

RQ2 How can we support software prototyping technologically?

RQ3 How would this technological support to software prototyping impact its workload?

1.3 scope

This research is scoped for UI prototyping for smartphone applications. The formative investigations, proposed solutions, and their respective evaluations are scoped accordingly.

1.4 research approach

We followed a user-centered approach (ISO-9241-210, 2010) in our research. The user-centered design (UCD) process endorses research and design based on a precise understanding of target users, their tasks, and their natural environments. UCD addresses the entire user experience, and it involves users throughout the design and development process. It is an iterative process that is driven, refined, and evaluated by the target userbase. Following the UCD approach, we analyzed software prototyping from three different aspects: traditional prototyping, rapid prototyping, and prototyping for accessibility. For each aspect, we started our investigation by conducting semi-structured interviews with UI/UX designers to understand their workflow, current practices, preferred tools, and pain points. After careful analysis, we proposed solutions for each aspect and evaluated their usability with UI/UX designers. Furthermore, we investigated the impact of using these novel solutions on the subjective workload experienced by UI/UX designers during software prototyping.

1.5 rq1: how much is the workload of software prototyping?

To investigate the as-is scenario, we studied the subjective workload experienced by 18 UI/UX designers during software prototyping using the NASA Task Load Index (NASA-TLX) (S. G. Hart et al., 1988). Participants were asked to prototype a randomly assigned application and report their subjective perception of the workload experienced for each fidelity. According to the results, the average workload experienced by the participants increased as they progressed from lo-fi to higher fidelities. Furthermore, the results reveal an increase in frustration, temporal demand, and effort, and, in contrast, a reduction in the subjective perception of achieved performance with each fidelity (Sarah Suleri, Pandian, et al., 2019).
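For readers unfamiliar with NASA-TLX, the following sketch illustrates the standard weighted scoring procedure: each of the six subscale ratings (0–100) is multiplied by the weight derived from the 15 pairwise comparisons, and the weighted sum is divided by 15 (S. G. Hart et al., 1988). The ratings and weights shown are invented example values, not data from this study.

```python
# Illustrative NASA-TLX scoring (standard weighted procedure); the ratings and
# pairwise-comparison weights below are invented example values, not study data.
SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def nasa_tlx(ratings: dict, weights: dict) -> float:
    """Weighted TLX: sum(rating * weight) / 15, where the weights sum to 15."""
    assert sum(weights.values()) == 15, "weights come from 15 pairwise comparisons"
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15

ratings = {"mental": 70, "physical": 20, "temporal": 65,
           "performance": 40, "effort": 60, "frustration": 55}
weights = {"mental": 4, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 2}

print(round(nasa_tlx(ratings, weights), 1))  # overall workload on a 0-100 scale
```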

1.6 rq2: how can we support software prototyping technologically?

In the last 25 years, numerous academic and commercial tools were introduced to aid UI/UX designers during UI prototyping. A detailed discussion of these academic artifacts and commercial prototyping tools can be found in the Literature Review section. Upon reviewing these tools, we found that these prototyping tools tackle prototyping fidelities as standalone, distinct steps. However, in reality, they are interconnected and interdependent. In our research, we are interested in exploring designers’ workflow during UI prototyping, the problems they face due to the inadequacies of the current solutions, and the resulting workload experienced by them. Moving forward, we aim to address their problems by providing technological support to the prototyping process and later, study the impact of this technological support on their workload. In order to do that, we analyzed the prototyping process from the following aspects:

Traditional Prototyping

We conducted a formative user study using semi-structured interviews with 45 UI/UX designers. This study aimed to understand common practices, tools, and strategies UI/UX designers use during UI prototyping. We were also interested in finding out the evolution, inter-connectivity, and inter-dependency of all three prototyping fidelities in practice. Overall, our participants reported that UI prototyping involves numerous iterations of the UI design. The problematic part is not the reiterations but the rework. They reported that they had to start the design from scratch every time they transformed one fidelity into another. They found this frustrating, as it added to their overall workload. To address this issue, we investigated existing tool support for semi-automating the transformation of lo-fi to me-fi and then hi-fi prototypes using pattern recognition (Beryl et al., 2003; Caetano et al., 2002; Coyette, Faulkner, et al., 2004b; Landay, 1996; Perez et al., 2016) and deep learning for object detection (Beltramelli, 2017; Benjamin, 2017; Kumar, 2018; LeCun et al., 2015; Microsoft, 2018b). We discovered that these projects do not allow any flexibility in the sketch detection process. In essence, the user cannot choose when UI detection occurs or modify the detection results. So, once a UI sketch undergoes detection, it is not possible to return to the previous state.

To address this research gap, we introduce Eve2 (Shishkovets, 2019; Sarah Suleri, Pandian, et al., 2019), a prototyping workbench that provides the users with a canvas to sketch their concept as a low fidelity prototype. As per our preliminary semi-structured interviews with 18 UI/UX designers, 88% showed an inclination towards starting the UI design process by sketching their ideas as a lo-fi prototype. In the background, the UI Element Detector, MetaMorph3 (Pandian, 2019; Pandian, Sarah Suleri, Beecks, et al., 2020), detects the sketched UI elements using Deep Neural Networks (84.9% mAP, 72.7% AR). MetaMorph's object detection model (RetinaNet) was trained using the UISketch Dataset4, which contains 5,906 UI element sketches and 125,000 synthetically generated lo-fi sketches (Pandian, Sarah Suleri, and Jarke, 2020). With the information provided by MetaMorph, the UI Element Generator creates the respective UI elements as me-fi. Lastly, the Code Generator converts me-fi to hi-fi as executable code. We evaluated Eve using SUS with 15 UI/UX designers; the results depict excellent usability and high learnability (usability score: 89.5).
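To make the detection step concrete, the sketch below runs an off-the-shelf RetinaNet from torchvision over a sketch image and prints boxes, labels, and confidence scores. This is only an analogy for what MetaMorph does: the COCO-pretrained weights, the file name, and the 0.5 confidence threshold are assumptions made for illustration, whereas MetaMorph's actual detector is trained on the UISketch dataset of UI element sketches.

```python
# Illustration only: a COCO-pretrained RetinaNet stands in for MetaMorph's
# custom UI-element detector; file name and threshold are assumptions.
import torch
from torchvision.models.detection import retinanet_resnet50_fpn
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = retinanet_resnet50_fpn(weights="DEFAULT").eval()  # load a pretrained detector

image = Image.open("lofi_sketch.png").convert("RGB")       # a scanned/drawn lo-fi sketch
with torch.no_grad():
    prediction = model([to_tensor(image)])[0]               # boxes, labels, scores

for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
    if score >= 0.5:  # keep confident detections only (assumed threshold)
        print(label.item(), [round(v) for v in box.tolist()], round(score.item(), 2))
```

In Eve, the analogous detection output feeds the UI Element Generator, which replaces each detected sketch region with the corresponding me-fi widget.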

Rapid Prototyping

Next, we analyzed the software prototyping process from the aspect of rapid prototyping. In agile development, lean UX designers conduct rapid prototyping to ensure quick releases. We interviewed 15 lean UX designers and examined rapid prototypes to understand their workflow during rapid prototyping. Participants reported compromising on the quality of the UI design due to tight deadlines. They also reported developers' inability to produce the same quality of UI design using front-end code due to a lack of UI design knowledge. UI design knowledge being scattered among numerous sources such as websites and books was also found problematic (Kipi, 2019; Sarah Suleri, Kipi, et al., 2019). To address these pain points, we propose a UI design pattern-driven approach for rapid prototyping. To realize this approach, we introduce Kiwi5, a web-based library.

2 https://designwitheve.com/
3 https://metamorph.designwitheve.com/
4 https://www.kaggle.com/vinothpandian/uisketch
5 https://designwithkiwi.com/

Kiwi aims at consolidating UI design knowledge in the form of UI design patterns and guidelines (Chi Tran, 2019; Kipi, 2019; Sarah Suleri, Kipi, et al., 2019). We address the scattered UI design knowledge problem by consolidating UI design knowledge from various non-academic (BBC, 2015; Mobiscroll, 2018; Outsystems, 2017; Sheibley, 2013; Toxboe, 2007; UXPin, 2019) and academic (Crumlish et al., 2009; Duyne et al., 2002; Neil, 2014; Tidwell, 2010; Van Duyne et al., 2007) sources.

Each UI design pattern consists of a problem statement (what), context (when), the rationale (why), and a proposed solution (how). Additionally, Kiwi provides downloadable GUI examples, UI layout blueprints, and front-end code for each pattern. In addition to UI design patterns, Kiwi contains UI design guidelines from various sources, including academic research (Gong et al., 2004; Shitkova et al., 2015; Tarasewich et al., 2007; Weiss, 2002) and industry standards (Apple, 2018; Google, 2020; Microsoft, 2018a). A usability evaluation (SUS = 77.6) of Kiwi with 21 lean UX designers depicts good usability.
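As an illustration of this structure (not Kiwi's actual data model; the field names and example values are assumptions), a pattern entry could be captured roughly as follows, using the Product Catalog pattern shown in Figure 11.2 as the example:

```python
# Hypothetical representation of a Kiwi pattern entry; field names are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class UIDesignPattern:
    # Core description, following the what / when / why / how structure.
    name: str
    problem: str           # what
    context: str           # when
    rationale: str         # why
    solution: str          # how
    # Supporting material attached to each pattern.
    gui_examples: List[str] = field(default_factory=list)   # sample GUI images
    layout_blueprint: str = ""                               # downloadable blueprint
    frontend_code: str = ""                                  # downloadable code snippet
    related_guidelines: List[str] = field(default_factory=list)

product_catalog = UIDesignPattern(
    name="Product Catalog",
    problem="Users need to browse many items quickly.",
    context="Listing screens of shopping applications.",
    rationale="A consistent grid reduces scanning effort.",
    solution="Show items as uniform cards with image, title, and price.",
)
```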

Prototyping for Accessibility

To further our research, we investigated the workflow of designing accessible UIs. Accessibility in UI design leads to a more satisfying experience for all end-users, regardless of their abilities. We were interested in investigating the workflow of different UI/UX designers for creating UI designs that are accessible to users with visual, auditory, cognitive, speech, and motor disabilities. We surveyed 30 UI/UX designers, conducted 21 follow-up interviews, and analyzed 32 user profiling and UI design documents. After careful analysis, we identified the following pain points: limited access to the target user group, uncertainty regarding the priority of different aspects of users' data, ignorance of UI design guidelines for accessibility, and time constraints leading to disregarding the accessibility of the UI design.

To address these issues, we introduce Personify6 (Shanmuga Sundaram, 2020), a UI design guidelines library that graphically organizes pre-existing UI design guidelines for accessibility (Caldwell, Reid, et al., 2018) with respect to personas. These personas represent fictional characters with variations of visual, auditory, cognitive, speech, and motor disabilities. By introducing this library, we aim to associate accessibility guidelines with their respective personas to address the discoverability, findability, and usability of UI design guidelines for accessibility. As a result, we aim to support UX designers in utilizing UI design guidelines for creating accessible UI designs.

6 https://designwithpersonify.com/

A usability evaluation (SUS = 76.4) of Personify with 16 UI/UX designers depicts above-average usability.
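For reference, SUS scores such as the 89.5, 77.6, and 76.4 reported in this chapter follow Brooke's standard ten-item scoring rule; the sketch below shows that calculation with invented responses.

```python
# Standard SUS scoring (Brooke, 1996): odd items contribute (response - 1),
# even items contribute (5 - response); the sum is scaled by 2.5 to a 0-100 score.
# The ten responses (1 = strongly disagree ... 5 = strongly agree) are invented.
def sus_score(responses):
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([5, 2, 4, 1, 5, 2, 5, 1, 4, 2]))  # e.g. 87.5
```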

1.7 rq3: how would this technological support to software prototyping impact its workload?

Traditional Prototyping using Eve

Our workload analysis reveals that, unlike the traditional prototyping approach, Eve's comprehensive support eliminates the need for switching between various prototyping tools while progressing through lo-fi, me-fi, and hi-fi. Consequently, there is a significant decrease in the subjective workload experienced by UI/UX designers using the comprehensive approach offered by Eve. There is also a significant reduction in mental demand, temporal demand, and effort experienced by UI/UX designers using Eve. Compared to the traditional approach, the overall perceived performance increased fivefold using the comprehensive approach (Eve).

Rapid Prototyping using Kiwi

Concerning workload, our results indicate that the subjective workload experienced by UI/UX designers using the pattern-driven approach with Kiwi is significantly less than the workload experienced using the traditional approach to rapid prototyping. Specifically, there is a significant decrease in the physical demand and effort of rapid prototyping while using the pattern-driven approach. However, there is no significant difference in the subjective workload experienced while using UI design pattern libraries with and without a pattern standard (Sarah Suleri, Hajimiri, et al., 2020).

Prototyping for Accessibility using Personify

Our results indicate that the subjective workload experienced by UI/UX designers using the persona-driven approach offered by Personify is significantly less than the workload experienced using the traditional approach of prototyping for accessibility. Specifically, there is a significant decrease in the mental demand and effort of prototyping accessible UIs while using Personify (Shanmuga Sundaram, 2020).
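The significance claims above rest on comparing the TLX scores of the control and experimental groups; the exact statistical procedures are reported in the respective workload chapters (7, 13, and 19), not here. Purely as an illustration of such a group comparison, and with invented scores, a nonparametric test could be run as follows:

```python
# Illustration only: invented overall TLX scores for two groups and a
# Mann-Whitney U test; the dissertation's actual analyses appear in Chapters 7, 13, and 19.
from scipy.stats import mannwhitneyu

traditional = [68, 72, 61, 75, 70, 66, 73, 69, 64]   # invented scores
personify   = [45, 52, 48, 55, 43, 50, 47, 49, 51]   # invented scores

stat, p = mannwhitneyu(traditional, personify, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")  # p < 0.05 would indicate a significant difference
```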

1.8 benefits

This research aims to analyze the creative process of software prototyping from a workload perspective. We begin by performing a pain point analysis with UI/UX designers for traditional prototyping, rapid prototyping, and prototyping for accessibility. We address the rework and time constraint problem by semi-automating the fidelity transformation process using deep learning. Additionally, the UI design pattern-driven approach addresses this issue by providing pre-built solutions to repetitive problems. This research aims to provide designers with comprehensive solutions that support the entire process of prototyping. We provide UI design knowledge in a unified library to address the scattered knowledge problem. We represent end-users in terms of personas to increase their visibility and make it easier for designers to empathize with the target users. Lastly, we aim to assist the designers in communicating design to developers in terms of blueprints and front-end code. This work aims to extend prior work on UI prototyping and is broadly applicable to understanding the impact of using deep learning, UI design patterns, and personas on the workload of UI prototyping.

1.9 outline

We have organized this thesis into the following chapters:

Literature Review: In this chapter, we review the state-of-the-art commercial and academic software prototyping tools. Additionally, we summarize various methods and techniques to evaluate the workload experienced during any given task.

Traditional Prototyping: This part comprises six chapters describing our formative research regarding the designer's workflow of traditional prototyping, our proposed solution to their problems, the usability evaluation of the solution, and the impact of using the proposed solution on the workload of traditional software prototyping.

Rapid Prototyping: This part comprises six chapters describing our research regarding the designer's workflow of rapid prototyping, our proposed solution to their problems, the usability evaluation of the solution, and the impact of using the proposed solution on the workload of rapid prototyping.

Prototyping for Accessibility: This part comprises six chapters describing our research regarding the designer's workflow of prototyping accessible UIs, our proposed solution to their problems, the usability evaluation of the proposed solution, and the impact of using the proposed solution on the workload of prototyping for accessibility.

Conclusion: In the final chapter of this dissertation, we will summarize the results of the individual research projects we presented.

2 LITERATURE REVIEW

This chapter explores further the topic of technological support provided by numerous academic and commercial tools for software prototyping. It first summarizes various features provided by academic prototyping tools introduced in the last 25 years. Then it explores numerous state-of-the-art commercial tools that directly or indirectly support software prototyping. Finally, it summarizes the extent of technological support provided by these academic and commercial tools and points out their shortcomings. Another focus of this chapter is exploring various techniques to measure and evaluate the workload experienced during any given task. It also highlights the advantages, disadvantages, and challenges of each technique. The chapter ends by describing the workload measurement technique we have chosen for our research, the rationale behind this selection, the various challenges associated with the technique, and how we plan to address them.

2.1 software prototyping tools

To contextualize our work, we draw upon prior research in academic and commercial prototyping tools (Silva et al., 2019). We reviewed 155 prototyping tools and mainly focused on investigating how these tools support different fidelities of prototyping and the various features they offer (Shishkovets, 2019). The first section summarizes 15 academic prototyping artifacts (Table 2.1). The second section is a comparative summary of 140 commercial prototyping tools (Appendix a.1). The third section reviews both the academic and commercial tools that provide intelligent support to UI prototyping.

2.1.1 Academic Prototyping Tools

Most of the academic prototyping tools enable designers to digitally sketch their lo-fi designs using a stylus on a PC (Figure 2.1) or a digital whiteboard.


• SILK (Landay, 1996): Rubine recognizer; detects rectangle, squiggly line, straight line, and ellipse; transforms sketches into UI widgets.
• DENIM (J. Lin, Mark W Newman, et al., 2001)
• JavaSketchIt (Caetano et al., 2002): CALI recognizer and visual grammar; detects 10 UI elements; generates Java.
• Freeform (Beryl et al., 2003): Rubine recognizer; generates VB forms.
• DEMAIS (Bailey et al., 2003)
• InkKit (Chung et al., 2005): Microsoft text recognizer and Rubine recognizer.
• ProtoMixer (Petrie et al., 2007)
• SketchiXML (Coyette and Vanderdonckt, 2005): visual grammar; detects 32 widgets; generates UsiXML.
• GRIP-it (Van den Bergh et al., 2011): generates XAML.
• UISKEI (Segura et al., 2012): Levenshtein distance and Douglas-Peucker algorithms; detects 7 primitive shapes and 8 UI widgets; generates XML.
• Smart Pen (Ha et al., 2014)
• UsiSketch (Perez et al., 2016): pattern-recognition-based algorithm; detects 8 basic shapes and 32 widgets; generates UsiXML.
• Xketch (Li et al., 2017)
• Freestyle (Narendra et al., 2019): CALI recognizer; detects 4 primitive shapes (line, rectangle, circle, triangle).
• Eve (Sarah Suleri, Pandian, et al., 2019): deep neural network (DNN); detects 21 UI elements; generates XAML and Java.

Table 2.1: Overview of academic prototyping tools in comparison with Eve.

On the other hand, ProtoMixer and GRIP-It enable users to upload their previously drawn paper sketches for defining behavior. However, digital lo-fi sketching has an advantage over paper-based sketching due to editing features such as undo, redo, copy, and paste. The usual method of defining the behavior of lo-fi prototypes is by drawing arrows to connect screens and providing an overview of all connected screens using storyboards. Defining interactions and storyboards for me-fi and hi-fi prototypes is not supported by any of these tools. None of the academic prototyping tools support designing me-fi prototypes. However, SILK, JavaSketchIt, Freeform, SketchiXML, GRIP-It, UISKEI, and UsiSketch are a few tools that attempted to detect the constituent UI elements of a lo-fi sketch using pattern recognition algorithms. As a result, they produce the respective front-end code. For collaboration and evaluation, DENIM and GRIP-It provide a run mode to preview the UI designs. The main advantage of this mode is that it enables the designers to evaluate the UI during the design phase, even before a fully functioning prototype is available.

Figure 2.1: SILK sketching and storyboarding for a weather application (Landay and Myers, 2001)

2.1.2 Commercial Tools

In the last 25 years, numerous commercial tools were also introduced in addition to academic prototyping artifacts. Some of these tools, such as InVision, Balsamiq, Justinmind, Figma, and Adobe XD, are very popular among UI/UX designers for producing 'Proof of Concept' software prototypes (UserTesting, 2019). This section compares, contrasts, and highlights the most interesting features of 140 commercial prototyping tools. These features include the support for creating designs, defining behavior, and conducting evaluations of lo-fi, me-fi, and hi-fi prototypes. Out of 140 commercial tools, 22 drawing tools are not essentially prototyping tools but afford lo-fi prototyping by offering basic drawing features. Contrarily, 31 commercial prototyping tools support sketching UIs. Overall, 17 out of these 31 tools offer freehand sketching using a mouse, finger touch, or digital pen. Ten other tools offer drawing using a Bezier tool. The remaining four tools offer both freehand sketching and the Bezier tool. Overall, eight tools allow dragging and dropping preexisting UI element sketches to create lo-fi prototypes. We observed that ten tools support both lo-fi and me-fi prototyping. On the contrary, 49 out of 140 prototyping tools do not support lo-fi prototyping but instead expect designers to start the prototyping process from me-fi. The design of me-fi is usually created by using pre-existing templates or dragging and dropping UI elements from the tool's widget libraries. For creating interactions, 53 out of the 65 tools use hotspots, eight tools use hyperlinks, and seven tools use code. Some of these tools also support creating advanced interactions, such as swiping, long press, pinching in/out, and timer-based transitions.

These tools generate the respective interaction map, a.k.a. storyboard view, with all the screen transitions shown by arrows. Overall, 37 tools offer a preview mode, where designers can preview and evaluate their prototype. Applications such as Proto.io and InVision have a mirroring application for evaluating prototypes on the target platform. They also have the option of recording how users use a prototype on the target device. Additionally, 14 tools integrate with the UserTesting platform to quickly get valuable feedback on the prototype. Overall, 33 tools support exporting code; 18 of them generate clean, readable code; 10 tools generate machine-readable code; and the remaining five tools produce code snippets. Additionally, five intelligent prototyping tools convert UI sketches and screenshots to front-end code using deep learning. Only 23 tools offer the ability to share prototypes with others using a unique URL. 15 tools support collaboration using comments, voice chat, and annotating directly on the UI design.

2.1.3 Intelligent Fidelity Transformation

This section summarizes a few academic and commercial projects that attempted to automate the transformation of lo-fi to higher fidelities by using classical pattern recognition and deep learning. They are discussed successively.

Pattern Recognition Based Fidelity Transformation

Overall, eight academic artifacts adapted classical pattern recognition algorithms such as Rubine (Rubine, 1991), CALI (Fonseca et al., 2002), and custom recognizers (Segura et al., 2012) to transform lo-fi sketches to XML, SVG, Java, Visual Basic, and UsiXML code (Figure 2.2). These tools mainly focus on UI sketch recognition and the respective code generation. They do not support designing UIs and creating interactions. Although these tools are considered intelligent in providing support to UI prototyping, they only support the conversion of lo-fi to code and not the prototyping process itself. From the technical perspective, using pattern recognition approaches is a challenging task: pattern recognition algorithms must be carefully engineered to identify particular features, analyze them, and finally detect UI components.

Figure 2.2: Interface widgets that SILK recognizes during sketching (left) and in the transformed interface (right) (Landay and Myers, 2001)

Therefore, Deep Neural Networks (DNNs) have recently been used to approach UI element detection tasks (LeCun et al., 2015).

DNN Based Fidelity Transformation

Pix2Code is the only academic artifact that uses deep learning to convert me-fi UI screenshots to domain-specific language (DSL) code. Later, four commercial projects, Sketch-Code, Airbnb’s Sketching Interfaces, UIzard (based on pix2code), and Sketch2Code, were introduced as small-scale projects to convert in-house hand-drawn lo-fi sketches to front-end code using deep learning. However, their system architecture, training data, and performance details have not been revealed. Similar to the aforementioned academic artifacts that utilize pattern recognition based fidelity transformation, these tools solely focus on converting paper-based lo-fi sketches directly to hi-fi code. They do not offer any support for UI design beautification or creating interactions. Therefore, these tools are not fundamentally prototyping tools, but rather fidelity transformation tools. Since these tools do not afford lo-fi and me-fi, users are required to sketch UIs using the paper & pen technique and upload these UI sketches into the system for further processing. These tools utilize deep learning to detect the constituent UI elements and generate the respective code. However, the user has no control over the transformation process.

2.1.4 Summary & Identified Gaps

Our literature review highlighted that the existing prototyping tools tackle each fidelity as a standalone step in UI prototyping. However, in practice, all three fidelities are interconnected and interdependent.

• Some drawing tools only support sketching activities, without providing any further support for prototyping.
• Most tools start the design process from me-fi and do not support sketching lo-fi UIs.
• Some tools that support UI design in me-fi do not afford to define interactions.
• A few tools partially support both lo-fi and me-fi but either produce CSS code snippets or do not support hi-fi at all.
• Most tools do not offer the ability to preview the UI design and simulate the defined behavior for evaluation.

Intelligent fidelity transformation tools do not allow any flexibility in the sketch detection process. In essence, the user cannot choose when UI detection occurs, or modify the results of the detection. So, once a UI sketch undergoes detection, it is not possible to return to the previous state.

• Intelligent tools that recognize UI sketches do produce some front-end code, but they do not provide any support for UI design beautification and behavior definition.
• Furthermore, they do not take into account the designer’s preferences, as they impose their default configurations.

In summary, we observed that, despite the numerous academic and commercial prototyping tools that directly or indirectly support one or more fidelities of UI prototyping, there is a lack of research on technological solutions that address the problems faced by designers during prototyping.

2.2 workload analysis

As stated previously, we are interested in investigating the load or burden experienced by designers while prototyping any design concept. For this purpose, we referred to various methods and techniques that can be used to perform a workload analysis in different situations.

2.2.1 Why Measure and Evaluate Workload?

Human factors research has been interested in investigating the relationship between workload and performance for years. Earlier studies have shown that overworked people take a hasty approach towards work and, therefore, make more mistakes, are more frustrated, and become fatigued. Interestingly, underworked people show similar symptoms due to boredom, attention deficit, and unfounded contentment with their performance. Therefore, the sweet spot seems to lie somewhere between being underworked and overburdened. These observations highlight two main aspects of the relationship between workload and performance: obtaining a quantitative measure of the workload experienced by a person, and defining reasonable boundaries for evaluating that workload.

2.2.2 Challenges to Measuring Workload

The very first challenge that arises while measuring workload is defining the term itself. Different people understand the term workload in different ways. Some perceive workload as physical, while others believe it to be more cognitive. For instance, it may refer to the amount of work performed as well as the physical and cognitive burden experienced while working. The second challenge is due to the differences in experience, skills, and abilities among different people. For example, a highly skilled designer might experience a fraction of the workload that a novice designer experiences when doing the same task for the first time. Despite the tricky notion of workload, there are several ways of quantifying the experienced workload. The following sections summarize various workload measurement techniques, highlighting the advantages and disadvantages of each.

2.2.3 Workload Measurement Techniques

This section provides an overview of four different measurement approaches, describes various measurement techniques belonging to each approach, and summarizes their benefits and drawbacks.

Performance Measures

Performance measures of workload focus solely on objectively measuring the task performed by a person (Muckler et al., 1992). Workload measurement techniques focusing on performance measures include measuring speed and accuracy, measuring steps, and performing task analysis.

measuring speed and accuracy The most straightforward performance measurement technique measures the speed or accuracy at which a person performs a task (Karat et al., 1999). Measuring speed and accuracy can be performed using a stopwatch to measure the time needed to accomplish a task while noting the person’s progress. The main benefit of this technique is the minimal effort required of the experimenter to decide whether a person’s performance is acceptable. If performance is acceptable, the workload is considered to be acceptable. On the other hand, measuring just speed and accuracy is somewhat inconsiderate of the person’s condition during the task. For instance, this performance measure completely disregards whether a person is severely overburdened or underperforming. This can be problematic because the lengthy time spent performing a task may cause fatigue, boredom, and other conditions that can impact performance.

measuring actions This performance measure is more sensitive to the person’s state because it focuses on measuring the actions taken to accomplish a task. These actions may include oral communication, mental computations, decisions taken, and any visual exploration required during that task (Fairclough et al., 1993). The number of actions performed during a task is considered directly proportional to the workload experienced. The primary benefit of this technique is the ease and simplicity it offers the experimenter in measuring the workload. A fundamental drawback is that it does not address the concept of workload straightforwardly: the number of actions taken to accomplish a task does not necessarily imply less or more workload. This approach also overlooks the skill, capability, and experience differences between people.

task analysis This technique is a variation of the measuring actions technique. Instead of analyzing the actions performed during a task, it focuses on counting the number of procedural steps required to accomplish a task (Gray et al., 1993). The primary benefit of this technique is that it does not require a person to perform a task to calculate the workload; instead, the experimenter analyzes the procedural steps on their own. However, this technique supposes that everyone will follow the same steps to accomplish the task and disregards skill, capability, and experience variability among different people. Despite its simplicity, this technique requires time-consuming effort from the experimenter.

Indirect Measures

An indirect means of measuring workload implies determining how much capacity a person is left with while performing a task. The level of workload imposed by the primary task is estimated based on how well the person performs another task simultaneously (Strayer et al., 2006). If the person can comfortably perform both tasks simultaneously, it is concluded that the primary task imposes a moderate workload. Otherwise, it is concluded that the primary task demands all the person’s capabilities and imposes a high workload. However, the outcomes of indirect measures are highly influenced by which task is chosen as the secondary task. There are two guidelines for choosing an appropriate secondary task: 1) both tasks should use the same resources, and 2) the secondary task should require an ample amount of effort to finish (Fisk et al., 1983). A few recommended secondary tasks include sorting cards (Lysaght et al., 1989), mental math, monitoring alerts, tapping, and classification tasks (Gawron, 2019). The main benefit of indirect measures is their awareness of the person’s condition throughout the task. However, they entirely neglect variations in skill, capabilities, and strategies to perform any task. Indirect measures also make assumptions about the effects of a secondary task on the primary task. For instance, a well-performed secondary task might negatively impact the primary task. As it is not apparent when a person decides to neglect the primary task to pay attention to the secondary task, indirect measures cannot confidently conclude the primary task’s level of workload.

Subjective Measures

Subjective workload measures require the person performing the task to report their subjective perception of the experienced workload (Vidulich and Tsang, 1987). These measures do not cater to the kind of task at hand or the person’s performance during the task (Moroney et al., 1995). There are two types of subjective workload measurement techniques:

subjective numerical measurement techniques These techniques require the person to assign a numerical or ordinal value to their subjective perception of the workload they experience during a task.

• Instantaneous Self-Assessment (ISA): This technique requires a person to rate the subjective perception of their workload on a scale from 0 to 100. The primary benefit of this technique lies in its simplicity and ease of data collection (Tattersall et al., 1996). A fundamental drawback of this technique occurs due to the varying definitions of workload among different people. The differences in the way people understand the notion of workload can have drastic effects on workload measurement (Hering et al., 1996). Another limitation lies in the different perceptions of different parts of the 0 to 100 scale. Research has also shown that people associate workload with their performance. So, when they believe their performance is low, they may perceive that the workload is high (Yeh et al., 1988).

• NASA Task Load Index (TLX): This technique was created to mitigate several difficulties caused by the varying definitions of workload among different people. It is similar to the ISA technique in periodically asking the person to rate their subjective perception of the experienced workload (Hill et al., 1992). However, instead of using a single scale for rating workload, the TLX technique uses six different sub-scales: Physical Demand, Mental Demand, Temporal Demand, Performance, Effort, and Frustration. Consequently, these six sub-scales accommodate six different aspects of defining workload (Sandra G Hart, 2006). Using this technique, a person is required to rate the six sub-scales subjectively and assign weights to each sub-scale. These ratings and weights can be collected during or after the completion of the task. The overall workload is calculated by multiplying each sub-scale rating by its weight, summing these products, and dividing the sum by the total of all weights (S. G. Hart et al., 1988); a short computation sketch follows this list. The main advantage of the TLX technique lies in accommodating different perceptions of workload and eliminating prejudices regarding the impact of substandard performance on the workload (Casner, 2009; Yeh et al., 1988). The main drawback of the TLX technique is that it is comparatively more time-consuming. Similar to the ISA technique, it also suffers from the scale loading1 problem (Gore, 2010).

• Bedford: Like the ISA technique, this technique provides a person with a 1 to 10 scale to rate their workload. It additionally provides a detailed description of the ratings to address the scale loading problem faced by the ISA and TLX techniques. It also presents a hierarchical decision tree to aid the process of picking the desired rating. Using the Bedford technique, a person is required to navigate through the decision tree, narrow the workload ratings down to approximately two to three options, and then choose one rating based on its description (Roscoe, 1984; Roscoe and Ellis, 1990). The primary benefit Bedford provides is the verbal descriptions that explain the interpretations associated with each value of the rating scale. However, the intervals between the ten ratings cannot be assumed to be equal. Another limitation of this technique is that it can only be used after completing the given task. It was also reported that, with the passage of time and gradual familiarity with the Bedford scale, people did not feel the need to use the decision tree and proceeded directly to the rating scale (S. G. Hart et al., 1988). Lastly, as a part of this technique, a person is asked to assess their spare capacity (Brown, 1962; Roscoe and Ellis, 1990). Similar to the problems associated with the varying definitions of workload, the term spare capacity can be interpreted in many ways. As a result, different people can give tremendously different ratings based on their understanding of the term.
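The weighted TLX computation described above can be illustrated with a minimal Python sketch. The ratings and weights below are invented example values, not data from any study cited in this thesis; the weights follow the usual TLX convention of summing to 15 (one point per pairwise comparison won).

# Minimal sketch of the weighted NASA-TLX computation described above.
# Ratings (0-100) and weights (wins across the 15 pairwise comparisons)
# are hypothetical example values, not study data.

ratings = {
    "Mental Demand": 70, "Physical Demand": 30, "Temporal Demand": 55,
    "Performance": 40, "Effort": 65, "Frustration": 25,
}
weights = {
    "Mental Demand": 4, "Physical Demand": 1, "Temporal Demand": 3,
    "Performance": 2, "Effort": 4, "Frustration": 1,
}

def tlx_overall_workload(ratings, weights):
    """Overall workload = sum(rating_i * weight_i) / sum(weight_i)."""
    weighted_sum = sum(ratings[scale] * weights[scale] for scale in ratings)
    return weighted_sum / sum(weights.values())

print(f"Overall workload: {tlx_overall_workload(ratings, weights):.1f}")  # 56.0 for these example values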

subjective comparative measurement techniques These techniques require the person to compare two or more tasks to determine which task has a comparatively higher workload. A significant advantage of such techniques is that they do not require a person to assign numerical or ordinal values to depict the experienced workload, which removes the problems associated with the interpretation of the scale. However, these techniques offer limited means to determine why a particular task is causing a high workload.

• Subjective Workload Dominance Technique (SWORD): This technique requires a person to perform a comparative analysis between different tasks using a comparison sheet containing a 17-element scale. The middle element on the scale implies that the workload experienced during the given tasks was approximately the same. Elements to the right or left of the midpoint indicate that the experienced workload was higher for one of the tasks (Vidulich, 1989; Vidullch et al., 1991). However, these relative measures using comparisons may not always be consistent and reliable (S. G. Hart et al., 1988; Vidullch et al., 1991). Therefore, these comparisons undergo a statistical analysis to evaluate the degree of consistency and the reliability of the estimated experienced workload (Budescu et al., 1986; C. Williams et al., 1980).

1 A person tends to rate 50 as the average value and linearly move towards either end to depict an increase or decrease in perceived workload (Gore, 2010).

Physiological Measures

Physiological measures rely on objectively measuring the experienced workload by analyzing the physiological changes in a person during a task (Braby et al., 1993). While many physiological measures have been considered over time, none has been proven to effectively measure the experienced workload (S. G. Hart et al., 1988).

heart rate Heart rate is one of the most straightforward means to measure workload. It can be monitored using a heart rate monitor that records heart rate approximately once per second (L. Mulder, 1992; Porges et al., 1992). Research shows that heart rate has a good correlation with physical tasks and a moderate correlation with mental tasks (Furedy, 1987; Jorna, 1992; Roscoe, 1992).

heart rate variability Heart rate variability examines the variation in time intervals between heartbeats using specialized equipment. A few studies (Metalis, 1991; Vicente et al., 1987) have successfully established a correlation between heart rate variability and mental workload.

evoked potentials These techniques capture the variations in electrical potentials in response to visual or auditory stimuli. However, such measurement techniques are beyond the scope of this research.

2.2.4 Discussion

This section discusses and reflects on the limitations of the different measurement techniques discussed earlier. It also highlights the most suitable workload measurement technique for our research. As stated previously, the performance measurement techniques focus solely on performance, but their results are questionable because they fail to establish causality between high workload and inadequate performance (Vidulich and Wickens, 1986; Yeh et al., 1988). They also neglect the current state of the person and the skill variability among different people. Similarly, indirect measures can be unreliable if the primary and secondary tasks are not appropriately defined and related. When the aggregate workload for multiple individuals is calculated, these measures tend to produce moderate workload results. Subjective measurement techniques capture the subjective perception of workload but face varying understandings of the scale used. Even after using a standard scale, these techniques still face the problem of some people not revealing their actual states. Lastly, the physiological measures have a reasonable correlation with physical tasks and a moderate correlation with mental tasks. However, they have not yet been proven to measure the experienced workload effectively.

2.2.5 Chosen Technique

Considering the creative and artistic nature of software prototyping, we did not consider performance, physiological, and indirect measures suitable for evaluating workload. Instead, we selected the subjective numerical measurement technique for software prototyping, both stepwise for each fidelity and then in totality for the entire process. As a result, this research shall (i) help understand the subjective workload experienced by UI/UX designers during software prototyping and (ii) provide a first attempt at evaluating the impact of using technological support on the workload of software prototyping.

2.2.6 Challenges & Proposed Solutions

A common problem faced during workload analysis is the scale loading problem, i.e., the participants tend to rate 50 as the average value and linearly move towards either end to depict an increase or decrease in perceived workload (Gore, 2010). One proposed solution is to apply more than one subjective workload measurement technique and then compare the results (Hendy et al., 1993). A problem with this approach is that it can be very time-consuming, as the entire process of software prototyping can be very long and cumbersome in itself. Another approach is to thoroughly explain the scale and procedure to the participant prior to the actual rating and, if possible, give an example with a sample scale. This explanation helps avoid misinterpretation of the scale (S. G. Hart et al., 1988). Additionally, participants can be asked to complete a reference task, such as a mental calculation, and then assess its workload. Reference tasks help decrease the between-groups variability by better calibrating participants to the various dimensions of NASA-TLX (Gore, 2010). To address the question of “How high is high?”, we shall rely on the significant workload range. The significant workload range, also known as the confidence interval, lies within the mean of the acquired data (±4) (Grier, 2015). Another question is for which tasks workload should be measured. One proposed solution is to measure workload periodically throughout the entire process (S. G. Hart et al., 1988). The resulting data can then be used to determine how the workload increases and decreases throughout the process.

Keeping these decisions in mind, in the successive chapters, we shall explore software prototyping from various aspects: Traditional Prototyping, Rapid Prototyping, and Prototyping for Accessibility. The upcoming chapters shall describe our research regarding the designer’s workflow in these three prototyping aspects, our proposed solution to their problems, the usability evaluation of the solution, and the impact of using the proposed solution on the workload of software prototyping.

Part I: Traditional Prototyping

Synopsis

Traditional UI prototyping involves the evolution of a concept through various stages of design, such as low, medium, and high fidelity prototypes. As a result of our formative study and review of existing prototyping tools, we proposed a feature list for providing comprehensive support to the entire UI prototyping process. Based on this feature list, we developed Eve, a sketch-based comprehensive prototyping tool. Eve enables UI/UX designers to sketch their ideas as lo-fi prototypes and automatically generates the consequent me-fi and hi-fi prototypes by detecting UI elements using deep learning. As per the usability evaluation using SUS, Eve scored an average of 89.5 points out of 100, which implies overall excellent usability and learnability.

We further investigated the impact of using the comprehensive approach offered by Eve on the workload of UI prototyping. Our workload study using NASA-TLX revealed that the subjective workload experienced by UI/UX designers using a comprehensive approach (Eve) is significantly less than the workload experienced using the conventional method of UI prototyping. Specifically, there is a significant decrease in mental demand, temporal demand, effort and a notable increase in performance of UI/UX designers during UI prototyping while using the comprehensive approach (Eve).

3 FORMATIVE STUDY ON TRADITIONAL PROTOTYPING

After reviewing existing academic and commercial tools, we conducted a formative user study using semi-structured interviews (Shishkovets, 2019). This study aimed to understand common practices, tools, and strategies UI/UX designers use during traditional UI prototyping. We were also interested in finding out the evolution, inter-connectivity, and inter-dependency of all three prototyping fidelities in practice.

3.1 participants

For our formative study, we recruited 45 participants (F=25, M=20) using purposive and snowball sampling. Our recruitment criteria for participation in this study required participants to have at least one year of prior UI prototyping experience. Participants were 30.5 ± 7.5 (23-38) years old and had 2.93 ± 1.13 (1-5) years of prior experience in UI prototyping. Our participants included 24 UX Designers and 21 Product Designers. They were compensated for their participation (Figure 3.1).

(a) Occupation (b) Prior Experience (1-2 years: 17.8%, 2-3 years: 33.3%, 3-4 years: 24.4%, 4-5 years: 24.4%)

Figure 3.1: Demographics and prior experience of participants of the formative study on traditional prototyping.


3.2 procedure

We conducted semi-structured interviews (~35 min) in the natural environment of the participants. Each interview was audio-recorded and later transcribed. Interviews were conducted by one primary interviewer and one secondary interviewer (note-taker). To begin with, we explained the purpose of the formative study to the participants. We requested them to provide informed consent, demographics, prior prototyping experience, and willingness to participate in the workload study later. Once this information was collected, participants were asked open-ended questions regarding their common prototyping practices, for example: "During the process of UI prototyping, what steps do you normally take?". They were also asked to focus on how they prototype in practice rather than how it is described in theory, since understanding real-life UI prototyping practices is much more essential and useful in our case. Then, depending on the prototyping fidelities named by the participant during the interview, each fidelity was discussed in more detail. During these interviews, we also gathered various documents regarding UI designs to analyze the documentation conventions of different UI/UX designers. Our primary focus was on investigating preferred tools, prototyping techniques, UI design, project structure, collaboration, and evaluation.

3.3 analysis

To analyze the data collected from our formative study, we followed the inductive analysis approach with affinity mapping from the Grounded Theory methodology (Strauss et al., 1994). Using the open-coding approach, we developed an initial coding scheme based on our initial observations. Two coders independently coded two transcripts to refine the coding scheme. For further discussion, we used the affinity mapping technique to arrange the coding themes. Next, we iteratively coded another three transcripts individually. After a few iterations, both coders reached a near-perfect agreement (Cohen’s kappa, κ=0.82).
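To illustrate how the inter-coder agreement reported above can be quantified, the following Python sketch computes Cohen’s kappa with scikit-learn; the two coders’ labels shown here are invented placeholders, not our actual coding scheme or transcript data.

# Illustrative computation of Cohen's kappa between two coders.
# The code labels below are hypothetical placeholders, not the actual
# coding scheme or transcript data from the formative study.
from sklearn.metrics import cohen_kappa_score

coder_1 = ["tools", "collaboration", "evaluation", "tools", "rework", "tools"]
coder_2 = ["tools", "collaboration", "evaluation", "rework", "rework", "tools"]

kappa = cohen_kappa_score(coder_1, coder_2)
print(f"Cohen's kappa: {kappa:.2f}")  # values above 0.8 are commonly read as near-perfect agreement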

3.4 key findings

We organized our findings based on the three prototyping fidelities: lo-fi, me-fi, and hi-fi (Tables 3.1, 3.2, 3.3). We discuss the preferred tools, prototyping strategies, UI design, project structure, collaboration, and evaluation practices for each fidelity. In addition, we discuss the inter-connectivity and inter-dependency of each fidelity with its preceding and succeeding fidelities. In the end, we discuss the overall challenges faced while transforming one prototyping fidelity to another.

3.4.1 Lo-Fi

Our participants always start their prototyping process by creating a lo-fi prototype using the paper & pencil (n=40, 88.8%), whiteboard & pen (n=2, 4.44%), and tablet & pen (n=8, 17.7%) techniques. Since rough UI sketches require little effort, participants feel at ease sketching down their ideas or discarding them by simply throwing them away. Even though UI/UX designers prefer the feeling of real paper, they still miss the simple undo, redo, copy, and paste functionalities that any digital tool can provide. One of the participants pointed out:

Redrawing almost the same screens again annoys me. I tried to use post-it stickers, but that becomes a little bit messy in front of the user (while evaluation). So, instead of post-it, I actually taped the part of the screens I don’t want to keep and then scan the rest. That way, reproducing screens is much quicker and less annoying. - S1P04

On the other hand, our participants (n=3, 6.66%) also reported using Balsamiq for simulating sketched UIs rather than drawing them.

It (Balsamiq) gives it (lo-fi) a neat look. Especially for evaluation, it gives a good and clear impression of what the design is. Plus, I don’t trust my drawing skills to do the same. - S1P06

Fidelity   Purpose             Preferred Tools         Percentage of participants

Lo-Fi      UI design           Paper and pencil        88.8%
                               Whiteboard and pen      4.44%
                               Tablet and pen          17.7%
                               Balsamiq                6.66%
           Project structure   Paper and pencil        64.4%
                               Whiteboard and pen      22.2%
                               Tablet and pen          13.33%
           Collaboration       In-person meeting       84.4%
                               Emails                  15.5%
                               Design studio           75.55%
           Evaluation          In-person meeting       88.8%
                               Remote meeting          11.11%

Table 3.1: Summary of preferred tools and techniques for project structure, UI design, collaboration and evaluation during lo-fi prototyping.

Participants reportedly followed the same techniques for defining the structure of the project (Figure 3.2) as they did for UI design: paper & pencil (n=29, 64.4%), whiteboard & pen (n=10, 22.2%), and tablet & pen (n=6, 13.33%). As for collaboration, most participants (n=38, 84.4%) reported working on UI designs in a team during in-person meetings. Moreover, these meetings are more like design studio sessions (n=34, 75.55%), where team members explain their ideas, combine them, and come up with one consolidated solution. Some participants (n=7, 15.5%) also reported scanning their UI sketches and emailing them to their team members for feedback. User evaluations during low fidelity are most often performed in person as well (n=40, 88.8%).

It’s a lot of hassle to do it (evaluation of lo-fi) remotely. I have done it over video calls, but handling so many paper sketches becomes very messy like that. - S1P21

However, some participants (n=5, 11.11%) reported conducting remote user evaluations by scanning their paper prototypes and adding interactions to them using me-fi prototyping tools such as InVision.

Figure 3.2: Project structure created during lo-fi prototyping, provided by S1P32 (Shishkovets, 2019)

3.4.2 Me-Fi

Our participants reported that as a prerequisite of creating a me-fi prototype, they always created a lo-fi prototype.

There are two kinds of situations. One, where we formally make a lo-fi prototype and get it evaluated by users. Second is when we are not given a lot of time and asked to quickly design a medium (fidelity). Even if we are not formally asked to create a lo-fi, we need to. How else would I discuss my design ideas with others. Medium (fidelity) just takes too much work and time for doing this. - S1P33

For me-fi prototyping, most of the participants reported that they produce UI screens in design tools such as Photoshop (n=27, 60%), Illustrator (n=25, 55.5%), and the Sketch app (n=37, 82.2%) using their predefined widget sets. They further use these predesigned UI screens by uploading them into prototyping tools such as InVision (n=36, 80%) to create interactions. A few participants (n=2, 4.44%) mentioned using code (HTML, CSS) to design and connect UI screens.

Fidelity   Purpose             Preferred Tools              Percentage of participants

Me-Fi      UI design           Adobe Photoshop              60%
                               Adobe Illustrator            55.5%
                               MS PowerPoint                71.11%
                               Figma                        77.7%
                               Sketch                       82.2%
                               Framer                       44.4%
                               Marvel                       33.3%
                               Adobe XD                     66.6%
                               Moqups                       15.5%
                               InVision                     80%
                               HTML/CSS                     4.44%
           Project structure   Paper and pencil             71.1%
                               Whiteboard and pen           35.5%
                               UML diagram                  11.11%
                               Figma                        15.55%
           Collaboration       In-person meeting            11.11%
                               Remote meeting               71.11%
                               Calls                        64.44%
                               Comments and annotations     75.55%
                               Screen sharing               84.44%
           Evaluation          In-person meeting            24.44%
                               Remote meeting               75.55%
                               Simulated device             55.55%
                               Actual target device         44.44%

Table 3.2: Summary of preferred tools and techniques for project structure, UI design, collaboration and evaluation during me-fi prototyping.

Overall, participants reported using multiple prototyping tools, such as Microsoft PowerPoint (n=32, 71.11%), Figma (n=35, 77.7%), Framer (n=20, 44.44%), Marvel (n=15, 33.3%), Adobe XD (n=30, 66.6%), and Moqups (n=7, 15.5%), for designing UIs and defining interactions between multiple screens.

Similar to lo-fi, during me-fi prototyping, designers use the paper & pencil (n=32, 71.1%) and whiteboard & pen (n=16, 35.5%) techniques to define the structure of the project. Usually, it is a more detailed version of the project structure defined during lo-fi prototyping. On the other hand, some participants choose to use digital tools that support UML (n=5, 11.11%) and Figma (n=7, 15.55%) to define the project structure in the form of diagrams. Collaborative working during this stage is a prevalent practice. Due to the digital format of the designs, participants collaborate most often remotely (n=32, 71.11%) and sometimes during in-person meetings (n=5, 11.11%). Participants reported that they easily share design files or screens (n=38, 84.44%) for collaboration. They use audio/video calls (n=29, 64.44%) and comments (n=34, 75.55%) to discuss and collaboratively iterate on various designs. Similar to team collaboration, most of the participants reported conducting their user evaluations remotely (n=34, 75.55%) using simulated devices (n=25, 55.55%). On the contrary, some participants prefer to conduct the studies in person (n=11, 24.44%) using the actual target devices (n=20, 44.44%).

3.4.3 Hi-Fi

The term High Fidelity seemed to be ambiguous for the participants. Some of them, especially novice designers, consider it to be almost the same as me-fi, but more detailed in terms of design. So, they use the same tools as for me-fi, such as Adobe XD (n=6, 13.33%) and Figma (n=9, 20%), and further polish their designs. On the contrary, other participants (n=35, 77.7%) said that it is almost a complete working system, with only one difference from the final product: some functions can be mimicked using Wizard of Oz (S. Dow et al., 2005). Depending on the purpose and targeted operating system (OS), they use different programming languages such as Java, JavaScript, HTML, CSS, Swift, C++, etc. In order to understand the inter-dependency of fidelities, participants were also asked about situations where they have skipped, or hypothetically would skip, lo-fi or me-fi before creating a hi-fi prototype.

Skipping low and medium is calling for trouble. It’s like coding without any planning. I can’t just start implementing something without playing around with the design first. In hi-fi, it’s going to take up all my time in just doing that. - S1P09

Going from low (fidelity) to high (fidelity) directly without making mid (medium) fidelity would miss a lot of information. See, low gives me a wireframe, mid gives me info like colors, fonts, content, and so on. If I don’t have that info, I will have to do these same design tasks that are meant for mid, in high. The problem is I won’t just be editing in Photoshop; I will be coding it. That’s just too much work. - S1P39

Fidelity   Purpose             Preferred Tools           Percentage of participants

Hi-Fi      UI design           Programming language      77.7%
                               Adobe XD                  13.33%
                               Figma                     20%
           Project structure   IDEs                      75.5%
                               UML diagram               24.44%
                               Whiteboard and pen        15.55%
           Collaboration       In-person meeting         15.55%
                               Remote meeting            75.5%
                               Issue tracker             80%
                               Version control           84.44%
                               Email                     35.55%
                               Audio/video calls         88.88%
           Evaluation          In-person meeting         68.88%
                               Remote meeting            31.11%
                               Simulated device          62.22%
                               Actual target device      37.77%

Table 3.3: Summary of preferred tools and techniques for project structure, UI design, collaboration and evaluation during hi-fi prototyping.

Participants who use programming languages to create hi-fi prototypes reported that they use the respective IDEs to maintain the project structure (n=34, 75.5%). Others reported creating formal UML diagrams (n=11, 24.44%) or using the whiteboard & pen technique (n=7, 15.55%) to define the project structure. Most of the collaborative work is done remotely (n=34, 75.5%). Normally, the features are distributed among team members using an issue tracking platform such as JIRA (Atlassian, 2002) (n=36, 80%) and a version control system such as Git (GitHub, 2008) (n=38, 84.44%). They communicate through emails (n=16, 35.55%) and audio/video calls (n=40, 88.88%). Contrarily, some participants also reported collaborating in person (n=7, 15.55%) during their regular team meetings. Evaluations of hi-fi prototypes are mostly conducted in person (n=31, 68.88%) and less often conducted remotely (n=14, 31.11%). Participants reported using simulated devices (n=28, 62.22%) to present their systems, especially during remote evaluations. They also explained that using simulated devices is more convenient and affordable while conducting a large number of user studies. On the other hand, some participants (n=17, 37.77%) reported conducting evaluations using the actual target devices during in-person evaluation sessions.

3.5 summary

Overall, our participants reported that UI prototyping involves numerous iterations of the UI design. The problematic part is not the reiterations but the rework (n=38, 84.44%). They reported that every time they have to transform one fidelity to another, they have to start the design from scratch. They found this frustrating, as it adds to their overall workload.

For lo-fi, I usually come up with quite a few designs, and we discuss it in a team and select one or a combination of a few as one (design). Once a lo-fi design is done and evaluated, I make the me-fi for it. The problem is even if I scan my lo-fi sketches, I still have to start designing (me-fi) from the start. Same for hi-fi. I mean, I already reached a good point in my lo-fi design, why do I have to start over with me-fi? - S1P28

Overall, participants reported using an average of six different tools to create all three fidelities of one UI prototype. At minimum, they used four tools (S1P17), and at most, they reported using 13 tools for creating all three fidelities of one UI design (S1P20). Participants expressed their frustration regarding switching between multiple tools to maintain one UI prototype (n=31, 68.88%).

3.6 next steps

The results of our formative study summarized the UI prototyping practices, challenges, and the most commonly used prototyping tools. We further analyzed these findings to list the features needed by UI/UX designers during various fidelities of UI prototyping. We expanded this list by including the common features we discovered during the review of existing academic and commercial prototyping tools. This feature list enhances the requirements for a prototyping tool suggested by Coyette and Vanderdonckt (2005) and Ha et al. (2014). As a result, we aimed to formulate a feature list to provide technological support that addresses the pain points of UI/UX designers during traditional prototyping. We further aim to analyze the impact of this comprehensive support on the workload of traditional prototyping.

4 FEATURE LIST OF PROPOSED SOLUTION

Our formative study aimed to understand common practices, tools, and strategies UI/UX designers use during traditional UI prototyping. Moving forward, to address the workflow and pain points of UI/UX designers highlighted in our formative study, we propose providing comprehensive technological support to the entire prototyping process. Therefore, we drafted a list of features that are required to provide comprehensive support to UI prototyping (Shishkovets, 2019). The findings from the formative study helped us understand which features of current prototyping tools are popular among UI/UX designers and what additional features are desired to address their pain points. Therefore, this feature list consolidates numerous features from various tools from our Literature Review and highlights their popularity among UI/UX designers, as per our Formative Study. Our feature list is categorized into seven sections: project management, fidelities & modes management, screen management, user input, fidelity transformation, interaction map, and lastly, collaboration & evaluation. They are discussed successively.

4.1 project management

UI/UX designers should be able to organize their projects and have an overview of them (Bailey et al., 2003; Beryl et al., 2003; Caetano et al., 2002; Chung et al., 2005; Coyette, Faulkner, et al., 2004a; Coyette and Vanderdonckt, 2005; Landay, 1996; Li et al., 2017; J. Lin, Mark W Newman, et al., 2000, 2001; Narendra et al., 2019; Mark W. Newman and Landay, 2000; Mark W. Newman, J. Lin, et al., 2003).

Thus, the system should be able to:

1. Create a new project and name it [Study 1 (n=34, 75.5%)]
2. Delete, edit, preview, and open the project [Study 1 (n=22, 48.89%)]
3. Show the various attributes of the projects such as name, platform, and total number of screens [Study 1 (n=12, 26.66%)]


4.2 fidelities and modes management

These features refer to the various modes and fidelities of prototyping. A comprehensive system should support all three fidelities: low, medium, and high. For each fidelity, the system should have three main modes: design, interaction, and preview (Bailey et al., 2003; Beryl et al., 2003; Caetano et al., 2002; Chung et al., 2005; Coyette, Faulkner, et al., 2004a; Coyette and Vanderdonckt, 2005; Figma; Landay, 1996; Li et al., 2017; J. Lin, Mark W Newman, et al., 2000, 2001; Narendra et al., 2019; Mark W. Newman and Landay, 2000; Mark W. Newman, J. Lin, et al., 2003).

Hence, the system should support:

1. Design mode to work on the appearance of the prototype [Study 1 (n=37, 82.2%)]
2. Interaction mode to work on the behaviour of the prototype [Study 1 (n=35, 77.7%)]
3. Preview mode to view and evaluate the prototype [Study 1 (n=32, 71.11%)]
4. Switching between modes at any point
5. Multi-fidelity: switching between fidelities (low, medium, high)

4.3 screen management

These features are regarding the management of screens and different templates to design the UIs (Bailey et al., 2003; Beryl et al., 2003; Caetano et al., 2002; Chung et al., 2005; Coyette, Faulkner, et al., 2004a; Coyette and Vanderdonckt, 2005; Ha et al., 2014; Landay, 1996; Li et al., 2017; J. Lin, Mark W Newman, et al., 2000, 2001; Narendra et al., 2019; Mark W. Newman and Landay, 2000; Mark W. Newman, J. Lin, et al., 2003).

The system should support:

1. Different platforms such as phone, tablet, desktop, web, and watch [Study 1 (n=16, 35.55%)]
2. Different screen resolutions [Study 1 (n=39, 86.66%)]
3. Screen management features: create, delete, duplicate screens [Study 1 (n=33, 73.33%)]
4. Naming the screen [Study 1 (n=28, 62.22%)]
5. Use existing screen as template [Study 1 (n=17, 37.77%)]
6. Drawing features: pencil, eraser, and ruler [Study 1 (n=40, 88.8%)]
7. Colour palettes for the UI screens [Study 1 (n=25, 55.55%)]
8. Predefined UI widget set [Study 1 (n=37, 82.2%)]
9. Editing features: cut, copy, paste, duplicate, and delete [Study 1 (n=8, 17.77%)]
10. Control features: undo, redo, and select [Study 1 (n=8, 17.77%)]

4.4 user input

This category of features refers to different means of input to create a UI prototype (Bailey et al., 2003; Beryl et al., 2003; Chung et al., 2005; Coyette, Faulkner, et al., 2004a; Coyette and Vanderdonckt, 2005; Ha et al., 2014; Klomann et al., 2013; Landay, 1996; Li et al., 2017; J. Lin, Mark W Newman, et al., 2000, 2001; Narendra et al., 2019; Mark W. Newman and Landay, 2000; Mark W. Newman, J. Lin, et al., 2003; Van den Bergh et al., 2011).

A comprehensive system should support:

1. Digital pen for a freehand sketch for any platform [Study 1 (n=40, 88.8%)]
2. Uploading an existing paper or digital prototype [Study 1 (n=24, 53.33%)]

4.5 fidelity transformation

These features are related to the transformation of one prototyping fidelity to another (Beltramelli, 2017, 2018; Benjamin, 2017; Beryl et al., 2003; Caetano et al., 2002; Chung et al., 2005; Coyette, Faulkner, et al., 2004a; Coyette and Vanderdonckt, 2005; Kumar, 2018; Landay, 1996; Li et al., 2017; Microsoft, 2018b; Narendra et al., 2019; Perez et al., 2016; Segura et al., 2012).

Therefore, the system should support:

1. UI element recognition and translation into UI widget
2. Validation and modification of recognized UI element(s)
3. Generation of corresponding code in several programming languages [Study 1 (n=35, 77.7%)]

4.6 interaction map

This category of features is mainly regarding defining the behaviour of the prototype (Figma; Ha et al., 2014; Landay, 1996; Li et al., 2017; J. Lin, Mark W Newman, et al., 2000, 2001; Mark W. Newman and Landay, 2000; Mark W. Newman, J. Lin, et al., 2003; Petrie et al., 2007; Van den Bergh et al., 2011).

A comprehensive system should support:

1. Different types of interaction: Screen transition, URL transition, Timer switch, and Gesture transition [Study 1 (n=36, 80%)]
2. Adding, deleting, and editing interactions [Study 1 (n=42, 93.33%)]
3. Defining source and destination of interactions
4. Interaction map to provide an overview of all interactions [Study 1 (n=22, 48.88%)]
5. Alignment of the interaction map

4.7 collaboration and evaluation

This category is regarding saving and synchronization of the current state of the project among various team members (Figma; Li et al., 2017; J. Lin, Mark W Newman, et al., 2000, 2001; Mark W. Newman and Landay, 2000; Mark W. Newman, J. Lin, et al., 2003; Van den Bergh et al., 2011).

The system should support:

1. Saving all the changes
2. Version control [Study 1 (n=38, 84.44%)]
3. Real time synchronization among team members [Study 1 (n=24, 53.33%)]
4. Sharing the prototype with others [Study 1 (n=38, 84.44%)]
5. Previewing the screen [Study 1 (n=28, 62.22%)]
6. Recording how the users experience the prototype
7. Comments and annotations: text, drawing, and audio messages [Study 1 (n=34, 75.55%)]

4.8 summary

Based on the findings of our formative study on the workflow and pain points of traditional UI prototyping, we drafted a list of features required to provide comprehensive support to UI prototyping. Our feature list is categorized into seven sections: project management, fidelities & modes management, screen management, user input, fidelity transformation, interaction map, and lastly, collaboration & evaluation. Based on this feature list, we developed Eve: a sketch-based UI prototyping tool that provides comprehensive support to the entire UI prototyping process.

5 EVE: A COMPREHENSIVE PROTOTYPING WORKBENCH

Eve1,2 is a sketch-based prototyping tool that aims at providing comprehensive support to the entire UI prototyping process (Shishkovets, 2019; Sarah Suleri, Pandian, et al., 2019). Eve enables UI/UX designers to sketch their designs as lo-fi and generates the respective me-fi and hi-fi automatically by detecting UI elements using deep learning. Eve was implemented as a desktop application for the Microsoft Surface Studio 2 (Surface Studio 2, 2018). We developed Eve using XAML and C# in Visual Studio using the Entity Framework. Eve contains 45,200 lines of code. In the following sections, we describe the features and UI design details of Eve.

5.1 projects management

As per the feature list described in Chapter 4, Eve enables users to create new projects (Figure 5.2) by providing a name, intended platform, OS, and means of input. Users can also manage their projects and have an overview of them (Figure 5.1). Within a project, users can save, share, and export the project.

Figure 5.1: Projects overview screen of Eve enables users to (A) view user information (B) view the list of projects (C) create a new project (D) sort the projects by last viewed, date created and project name (E) search projects by name.

1 https://designwitheve.com/ 2 https://github.com/sarahsuleri/Eve


Figure 5.2: In Eve, creating a new project involves three steps: (1) naming the new project and selecting the desired platform: Web, Desktop, Phone, Tablet, or Smartwatch, (2) choosing the OS, and (3) selecting the method of input

5.2 screen management

Users can create multiple screens within a project and have an overview of all screens. By default, each project has a blank screen provided to the user as a starting point. Moving forward, users can create a new screen, duplicate a screen, and create a new screen based on the master screen (if the master screen was created before). The master screen acts as a template to create similar screens. Changes made to the master screen are also replicated in all child screens.

5.3 fidelities and modes management

Once a project is open, at any point, the user can navigate between all fidelities: low, medium, and high (low is selected by default). Each fidelity has three modes: design, interaction, and preview (design is selected by default). Each fidelity and its subsequent modes are discussed successively.

5.4 design

The design mode mainly deals with defining and polishing the appearance of the UI design.

Lo-fi

The design mode of lo-fi (Figure 5.3) provides the users with essential drawing tools such as pencil, eraser, and ruler to sketch lo-fi designs on the canvas provided with the device template.

Figure 5.3: In Eve, users can sketch their designs in design mode of lo-fi.

The lo-fi design mode also provides fundamental control features (undo, redo, select), editing features (cut, copy, paste, delete, duplicate), and collaboration features (share, comment). Users can also zoom in/out to adjust the size of the canvas as per need. Users are also provided with an overview of all the sketched screens for quick access (Figure 5.4).

(a) Pencil (b) Ruler (c) Edit menu

Figure 5.4: In Eve, users can draw, erase, and edit a lo-fi sketch.

Fidelity Transformation

To automate the transformation of lo-fi to higher fidelities, we developed MetaMorph3,4 (Pandian, 2019; Pandian, Sarah Suleri, Beecks, et al., 2020; Sarah Suleri, Pandian, et al., 2019): a UI element detector that utilizes deep learning to identify UI elements from a lo-fi sketch. Unlike the existing intelligent commercial prototyping tools (Beltramelli, 2017, 2018; Benjamin, 2017; Kumar, 2018; Microsoft, 2018b), MetaMorph solely focuses on detecting the type and location of UI elements from a lo-fi sketch instead of transforming them to code directly.

Figure 5.5: In Eve, users can view and modify the detected UI elements in lo-fi sketches and the corresponding generated UI widgets in me-fi.

We developed MetaMorph with a Deep Neural Network (DNN) using the TensorFlow Object Detection API (TensorFlow). Its detection model is a RetinaNet (T.-Y. Lin et al., 2017) based Single-Shot MultiBox Detection (SSD) network with a ResNet backbone (Liu et al., 2015). To train MetaMorph’s object detection model, we generated Syn5 (Pandian, Sarah Suleri, and Jarke, 2020): a synthetic annotated dataset created from UI element sketches. Syn was generated using the UISketch dataset6 (Pandian, 2019; Pandian, Sarah Suleri, and Jarke, 2021): a large-scale dataset of UI element sketches that contains 17,979 hand-drawn sketches of 21 UI element categories. Therefore, MetaMorph can identify 21 UI elements: alert, button, card, checkbox checked, checkbox unchecked, chip, data table, dropdown, floating action button, grid list, image, label, menu, radio button checked, radio button unchecked, slider, switch enabled, switch disabled, text area, text field, and tooltip. MetaMorph provides 82.94% mean Average Precision (mAP) with 73.14% Average Recall (AR) for detecting UI elements from lo-fi sketches.

We integrated MetaMorph into Eve to automate the transformation of lo-fi to higher fidelities (Figure 5.5). When users sketch their lo-fi designs, the MetaMorph API is called automatically in the background at regular intervals, which users can configure. After every interval, the API is requested to detect only the newly sketched or newly modified UI elements; the previously detected UI elements remain intact in higher fidelities. Users can also turn the API call on or off. If the API call is turned off, the lo-fi sketches are not detected; however, users can still continue prototyping by manually creating their me-fi designs without intelligent assistance. The API status is depicted in three different states: inactive, active (i.e., the API is being called), and connection error (Figure 5.6).

3 http://api.metamorph.designwitheve.com/ 4 https://github.com/vinothpandian/MetaMorph 5 https://www.kaggle.com/vinothpandian/syn-dataset 6 https://www.kaggle.com/vinothpandian/uisketch


Figure 5.6: In Eve, users can view the current status of the API call in three different states: (a) inactive, (b) active (c) connection error.
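The interval-based detection workflow described above can be pictured as a simple polling client. The following Python sketch is purely illustrative: the endpoint path, request payload, and response schema are assumptions made for this example and are not MetaMorph’s documented API.

# Hypothetical polling client illustrating the interval-based detection
# workflow described above. The endpoint path, payload, and response
# fields are assumptions for illustration, not MetaMorph's actual API.
import base64
import time
import requests

API_URL = "http://api.metamorph.designwitheve.com/detect"  # assumed endpoint path
POLL_INTERVAL_SECONDS = 5  # stands in for the user-configurable interval

def detect_ui_elements(sketch_png_bytes):
    """Send the current lo-fi sketch region and return detected UI elements."""
    payload = {"image": base64.b64encode(sketch_png_bytes).decode("ascii")}
    response = requests.post(API_URL, json=payload, timeout=10)
    response.raise_for_status()
    # Assumed response schema: [{"type": "button", "box": [x, y, w, h], "score": 0.93}, ...]
    return response.json()

def poll_canvas(get_modified_regions, api_enabled=lambda: True):
    """Periodically detect only newly sketched or modified regions."""
    while True:
        if api_enabled():
            for region_bytes in get_modified_regions():
                for element in detect_ui_elements(region_bytes):
                    print(element["type"], element["box"], element["score"])
        time.sleep(POLL_INTERVAL_SECONDS)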

Me-fi

After the lo-fi is sketched, users can switch from lo-fi to me-fi using the fidelity menu in the top left corner of Eve. As mentioned previously, Eve uses MetaMorph to identify which UI elements are present in the lo-fi sketch and where they are located. This object detection process enables Eve to transform a lo-fi sketch to me-fi by converting the identified UI element sketches to the corresponding widgets. Users can switch between viewing detected UI elements (Figure 5.5) and the generated me-fi design (Figure 5.7a). Unlike the existing intelligent fidelity transformation tools (Beltramelli, 2017, 2018; Benjamin, 2017; Kumar, 2018; Microsoft, 2018b) that directly convert lo-fi sketches to code, Eve enables users to control and monitor the conversion process. Users can change the UI element type, appearance, color, alignment, interaction, content, and other styling properties of each detected UI element. In case a certain UI element remained undetected, users can also detect and label it manually. To further enhance their designs, users can also drag and drop UI elements from a predefined widget set.

(a) Me-fi Design (b) Color Palettes

Figure 5.7: In Eve, users can view and further polish their UI designs in design mode of me-fi.

In addition to the screens overview, control, and collaboration features, the design mode of me-fi also provides users with the ability to choose the desired color palette to select different looks for their designs. Users can also define their custom palettes. The chosen color palette is applied to all the constituent UI elements of all screens (Figure 5.7b).

Hi-fi

Similar to me-fi, Eve automatically generates the hi-fi for the sketched lo-fi by generating executable code based on the appearance and behavior of the lo-fi prototype (Figure 5.8). Users can modify the hi-fi design, and the respective code is updated automatically. However, users cannot edit or execute the code directly in Eve.

Figure 5.8: In Eve, users can view and edit the automatically generated hi-fi design and code.
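To make the idea of deriving front-end code from detected UI elements concrete, here is a deliberately simplified Python sketch. It is not Eve’s actual code generator (whose target language and templates are not detailed here); the element names and HTML templates are assumptions made for illustration only.

# Simplified illustration of mapping detected UI elements to front-end
# markup. This is not Eve's actual code generator; the element names and
# templates below are assumptions for illustration only.

TEMPLATES = {
    "button": '<button style="position:absolute; left:{x}px; top:{y}px;">{label}</button>',
    "text field": '<input type="text" style="position:absolute; left:{x}px; top:{y}px;" placeholder="{label}">',
    "image": '<img src="placeholder.png" style="position:absolute; left:{x}px; top:{y}px;" alt="{label}">',
}

def generate_markup(detected_elements):
    """Turn a list of detected elements into absolutely positioned HTML."""
    lines = []
    for element in detected_elements:
        template = TEMPLATES.get(element["type"])
        if template is None:
            continue  # element types without a template are skipped in this sketch
        lines.append(template.format(x=element["x"], y=element["y"],
                                      label=element.get("label", "")))
    return "\n".join(lines)

print(generate_markup([{"type": "button", "x": 24, "y": 310, "label": "Buy"}]))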

In addition to color palettes, widget set, and UI element properties, the hi-fi design mode provides users with advanced properties such as micro-animations, shadows, bevels, and embosses to further mature the UI design.

5.5 interaction

To define the behavior of the prototype, users can switch from design mode to interaction mode using the top mode menu bar in Eve. Interaction mode remains the same for all fidelities. In the interaction mode, users can create four types of interactions:

• Transition: Switching screens upon tapping a UI element
• URL: Opening a URL upon tapping a UI element
• Timer: Switching screens after a specified time interval
• Gesture: Switching screens upon performing a gesture

After all the interactions are defined, the users can view the corresponding interaction map (storyboard) of the prototype (Figure 5.9). Users can view, edit, or delete already created interactions at any time.


Figure 5.9: In Eve, the interaction map provides an overview of all interactions.
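As an illustration of how the four interaction types and the interaction map could be represented internally, the following Python sketch defines a minimal data model. The field names are assumptions made for this example, not Eve’s actual implementation (which is written in C#).

# Minimal, hypothetical data model for Eve-style interactions and the
# interaction map overview. Field names are illustrative, not Eve's schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Interaction:
    kind: str                              # "transition", "url", "timer", or "gesture"
    source_screen: str                     # screen the interaction starts from
    target_screen: Optional[str] = None    # destination screen, if any
    trigger_element: Optional[str] = None  # tapped UI element, if any
    url: Optional[str] = None              # opened URL for "url" interactions
    delay_seconds: Optional[float] = None  # delay for "timer" interactions
    gesture: Optional[str] = None          # e.g. "swipe-left" for "gesture" interactions

def interaction_map(interactions):
    """Group interactions by source screen to build a storyboard-like overview."""
    overview = {}
    for interaction in interactions:
        overview.setdefault(interaction.source_screen, []).append(interaction)
    return overview

example = [
    Interaction("transition", "Home", "ProductList", trigger_element="Browse button"),
    Interaction("timer", "Splash", "Home", delay_seconds=2.0),
]
print(interaction_map(example))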

(a) Normal view (b) Full-screen view

Figure 5.10: Preview mode of Eve

5.6 preview

In each fidelity of the prototype, users can preview and evaluate the design and behavior of the prototype (Figure 5.10a). Preview mode remains the same for all fidelities. In preview mode, the users can navigate all screens by tapping on the arrows on the screen or using keyboard navigation keys. Users can also switch to a full-screen view (Figure 5.10b) and vice versa. The full-screen view is suitable for conducting evaluations, as it enables the users to experience the design and behavior of the prototype in a distraction-free environment.

5.7 collaboration

All three fidelities provide users with the ability to collaborate with their team members.

Sharing

Users can share the project with others by sharing a unique URL. Multiple users can synchronously work on the prototype together. The status bar shows the currently active users.

Comments

During evaluation and collaboration, users can also post and respond to comments. Eve supports three kinds of comments: text, sketch, and audio. When someone posts a comment, others can see a comment indicator next to the respective stroke or widget (Figure 5.11). By tapping on the indicator, users can preview the comment, resolve it, or reply to it. Users can also see an overview of all the posted comments using the comment icon on the screen’s top right corner.

(a) Text Comment (b) Draw Comment (c) Audio Comment

(d) Comments Overview (e) Respond to Comment

Figure 5.11: In Eve, users can view, reply to, and resolve a comment using the comment indicator.

Screen & Project Export

Users can also export all the lo-fi screens as sketches and data files with stroke information. The me-fi screens can be exported as JPEG and SVG images. The hi-fi screens can be exported as JPEG images and a project file containing front-end code that can be executed in the respective IDE.

5.8 summary

Eve is a sketch-based prototyping tool developed according to the feature list identified in the previous chapter. It is a comprehensive solution that aims at providing technological support to the entire UI prototyping process. Eve enables UI/UX designers to sketch their UI designs as lo-fi and generates the respective me-fi and hi-fi automatically, utilizing deep-learning-based UI element detection. Additionally, Eve provides various useful features regarding project management, design, interactions, preview, and collaboration of UI prototypes.

6 USABILITY EVALUATION OF EVE

After the implementation of Eve, we conducted its usability evaluation using the System Usability Scale (SUS) (Brooke, 1996). This study aimed to quantitatively evaluate the usability of the features, interactions, navigation, content, and UI design of Eve (Shishkovets, 2019).
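For reference, SUS yields a single 0-100 score from ten alternately positive and negative items rated on a five-point scale: odd items contribute (rating - 1), even items contribute (5 - rating), and the sum is multiplied by 2.5 (Brooke, 1996). The sketch below shows this standard computation; it is illustrative and not part of the study materials.

def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 item ratings.

    Odd-numbered items are positively worded (contribution = rating - 1);
    even-numbered items are negatively worded (contribution = 5 - rating).
    The summed contributions (0-40) are scaled to 0-100 by multiplying by 2.5.
    """
    assert len(responses) == 10, "SUS has exactly ten items"
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# Example: a largely positive response pattern yields a score of 90.
print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 4, 2]))  # -> 90.0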

6.1 study design

We used purposive and snowball sampling to recruit 15 participants (9 UX designers, 6 Product Designers). Our recruitment criteria for participation in this study required participants to have at least one year of prior UI prototyping experience. All the 15 participants (F=8, M=7) were 27.13±3.51 (23 - 35) years old and had 2.63±1.14 (1 - 5) years of prior prototyping experience (Figure 6.1). These participants had not previously participated in the formative study. Participants were compensated for their participation.


(a) Occupation (b) Prior Experience

Figure 6.1: Demographics and prior experience of participants of the usability study of Eve.


The study took place in a quiet room. Each participant was provided with a Microsoft Surface Studio and a stylus to create their prototypes using Eve. The study was conducted by one primary facilitator and one secondary facilitator (note-taker). As part of the usability evaluation, participants were asked to create the UI design and behavior of a shopping application for Android as a lo-fi, me-fi, and hi-fi prototype using Eve. We began the study by introducing the purpose of the study and requested the participants to provide informed consent and demographic information. We then introduced the participants to the task. Participants created their prototypes in a lab setup and provided feedback using a think-aloud protocol. Once the participants had finished their task, they were asked to fill out the SUS questionnaire. Each evaluation took ~60 minutes.

6.2 results

Eve scored an average of 89.5 points out of 100, which denotes a very high level of usability. Overall, Eve was perceived as an easy-to-use and beneficial tool to assist UI/UX designers during their UI prototyping process. Figure 6.2 shows the mean responses to each part of the SUS questionnaire.


Figure 6.2: SUS mean responses for Eve

We further analyzed the responses to each part of the questionnaire.

Frequency of Use

Since most participants are UI/UX designers, they indicated a willingness to use Eve frequently during lo-fi, me-fi, and hi-fi prototyping. Nine participants (60%) strongly agreed, and six participants (40%) agreed that they would like to use this system frequently (Figure 6.3).

Figure 6.3: Participants’ preference for frequency of using Eve according to SUS.

Complexity

No respondents found the design, interactions, navigation, and content of Eve unnecessarily complicated. Twelve participants (80%) strongly disagreed, and three participants (20%) disagreed that they found the system unnecessarily complex (Figure 6.4).

Figure 6.4: Participants’ perception of complexity of Eve according to SUS.

Ease of Use

All the participants considered Eve easy to use. Thirteen participants (86.7%) strongly agreed and two participants (13.3%) agreed that the system was easy to use (Figure 6.5).

Figure 6.5: Participants’ perception of ease of using Eve according to SUS.

Need of Technical Support

Almost all participants were confident that they would not need any technical assistance while using Eve. Nine participants (60.0%) strongly disagreed, and five participants (33.3%) disagreed that they would need the support of a technical person to be able to use this system. However, one participant (6.7%) was neutral about the statement (Figure 6.6).

Figure 6.6: Participants’ perception of need of any technical support while using Eve according to SUS.

Integrity

Five participants (33.3%) strongly agreed, and ten participants (66.7%) agreed that the various features in Eve were well integrated. There were no negative or neutral responses regarding this statement (Figure 6.7).

Figure 6.7: Participants’ perception of how well integrated Eve is according to SUS.

Inconsistency

A majority of participants (14 out of 15) disagreed that there was too much inconsistency in the system. Six participants (40%) strongly disagreed, and eight participants (53.3%) disagreed with the statement. However, one participant (6.67%) gave a neutral response (Figure 6.8).

Figure 6.8: Participants’ perception of design inconsistencies in Eve according to SUS.

Ease of Learning

All of the participants agreed that most people would learn to use this system very quickly. Thirteen participants (86.7%) strongly agreed, and two participants (13.3%) agreed with the statement (Figure 6.9).

Figure 6.9: Participants’ perception of ease of learning Eve according to SUS.

Difficulty of Use

All of the responses reflect that participants did not find Eve very cumbersome to use. Eleven participants (73.3%) strongly disagreed, and four participants (26.7%) disagreed with the statement (Figure 6.10).

Figure 6.10: Participants’ perception of difficulty of using Eve according to SUS.

Confidence in Use

Almost all the participants felt confident in using Eve for lo-fi, me-fi, and hi-fi prototyping. Six participants (40%) strongly agreed and eight participants (53.3%) agreed with the statement (Figure 6.11). However, one participant (6.7%) was neutral regarding their confidence in using Eve.

Figure 6.11: Participants’ perception of their confidence in using Eve according to SUS.

Need of Prior Knowledge & Experience

Since our participants had prior experience in UI prototyping, they indicated that they did not need to learn a lot of things before they could get going with the system. Eleven participants (73.3%) strongly disagreed, and four participants (26.7%) disagreed with the statement (Figure 6.12).

Figure 6.12: Participants’ perception of the need of prior knowledge and expertise in using Eve according to SUS.

6.3 summary

Overall, participants found Eve easy to use, and they showed an inclination towards using Eve frequently to create UI prototypes. Since all the participants had prior knowledge and experience of UI prototyping, none felt they needed to learn anything new before using Eve. They found the various features of Eve well integrated and did not find the system unnecessarily complicated. They felt confident in using Eve and did not require any technical support while using it.

7 WORKLOAD EVALUATION OF EVE

We furthered our research by investigating traditional UI prototyping from the perspective of subjective workload. Here, workload refers to the perceived level of physical and cognitive burden experienced by the UI/UX designers during UI prototyping (Gore, 2010). Considering the artistic and subjective nature of UI prototyping, we followed the subjective numerical measurement technique using NASA Task Load Index (NASA-TLX) (Gore, 2010; S. G. Hart et al., 1988) to evaluate the subjective workload experienced during UI prototyping.
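In the weighted NASA-TLX procedure, each of the six dimensions is rated on a 0-100 scale and weighted by the outcome of pairwise comparisons between dimensions (weights 0-5, summing to 15); the overall workload is the weight-adjusted sum divided by 15 (S. G. Hart et al., 1988). The sketch below illustrates this standard computation with hypothetical values; it matches the scale, weight, and adjusted-rating columns reported later in Table 7.2 but is not code used in the study.

# Standard weighted NASA-TLX scoring; ratings and weights below are hypothetical.
DIMENSIONS = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def tlx_workload(ratings, weights):
    """Return the adjusted rating per dimension and the overall workload (0-100)."""
    assert sum(weights.values()) == 15, "pairwise-comparison weights must sum to 15"
    adjusted = {d: ratings[d] * weights[d] for d in DIMENSIONS}   # each in 0-500
    overall = sum(adjusted.values()) / 15
    return adjusted, overall

ratings = {"mental": 70, "physical": 20, "temporal": 60,
           "performance": 30, "effort": 65, "frustration": 40}
weights = {"mental": 4, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}
adjusted, overall = tlx_workload(ratings, weights)
print(round(overall, 2))  # -> 56.0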

7.1 rationale for study

This study is the first attempt to quantitatively measure and compare the subjective workload experienced by UI/UX designers following the traditional approach versus the comprehensive approach (Eve) for UI prototyping. Here, the term traditional approach denotes the usual UI prototyping tools and techniques used by UI/UX designers in practice. In contrast, the comprehensive approach denotes UI prototyping using the comprehensive technological support provided by Eve. To summarize, this study shall (i) help understand the subjective workload experienced by UI/UX designers during UI prototyping and (ii) provide a first attempt at evaluating the impact of using the comprehensive approach (Eve) on the workload of UI prototyping.

7.2 null hypothesis

We formulated our null hypothesis in terms of the subjective workload experienced by UI/UX designers during UI prototyping.

H0 There is no difference between the subjective workload of UI prototyping using the traditional approach and comprehensive approach (Eve).


The study designed to test our hypothesis is explained in the following sections.

7.3 participants

In total, 32 participants (16 male, 16 female) took part in the workload study. Participants were 30.5±7.5 (23 - 38) years old, had 2.86±1.09 (1 - 5) years of prior prototyping experience, and were a mix of 18 UX designers (56.3%) and 14 Product designers (43.8%) (Figure 7.1). We chose these 32 participants from the 45 Formative Study participants who agreed to participate in further studies. We ensured that none of these participants had previously been part of the Usability Study. As a prerequisite, we made sure that none of the participants had used Eve for UI prototyping before. All participants were compensated for their participation.


(a) Occupation (b) Prior Experience

Figure 7.1: Demographics and prior experience of participants of the workload study on traditional UI prototyping.

7.4 study design

The study consisted of two groups of participants (Experimental, Control). Participants were randomly assigned to the experimental or control group with the constraint that each group had an equal distribution of participants based on their gender and prior prototyping experience. Therefore, we carefully selected 32 participants out of the 45 participants of the Formative Study (Study 1) to ensure an equal distribution of gender and prior experience in both groups (Figure 7.2).


(a) Experimental (Eve) (b) Control

Figure 7.2: Distribution of participants based on their prior experience for the workload study on traditional prototyping.

The experimental group contained ne=16 participants (F=8, M=8) with 2.84 ± 1.06 (1 - 5) years of prior prototyping experience. Similarly, the control group contained nc=16 participants (F=8, M=8) with 2.88 ± 1.15 (1 - 5) years of prior prototyping experience. We compiled a list of eight distinct task categories (Appendix b). Each task category consists of three features. Participants in both groups were assigned a random task category. In total, participants had eight hours to create all three fidelities of the UI prototype (lo-fi, me-fi, hi-fi) based on the assigned task category. Within these eight hours, participants had no time constraint for creating each fidelity. They built their UI prototypes in a lab setup. During these eight hours, if the participants had any questions regarding the study, they could ask the moderator for clarification. The experimental group had to use Eve to experience the comprehensive approach to create their UI prototypes. In contrast, the control group had the independence to use any UI prototyping tools of their choice to follow the traditional approach of prototyping. The control group was restricted from using Eve during UI prototyping. In both cases, the UI design decisions were left solely up to the participants.

7.5 apparatus

The study was performed in a quiet room with a whiteboard. Participants were provided with a table and a comfortable chair.

Control Group

Participants from the control group were provided with the prototyping materials and prototyping tools that we identified during our Formative Study (Table 3.1, 3.2, 3.3). In case they required something else, they could ask the moderator. This happened twice in our study: once when a participant asked for a ruler to create a lo-fi prototype, and once when another participant asked for colored pencils. Additionally, they were provided with a Surface Studio 2 to create their prototypes. Prior to the study, we had already installed all the commonly used UI prototyping tools (Table 3.1, 3.2, 3.3) on the Surface Studio 2. Participants were informed about the installed prototyping tools. They were also given the freedom to install any new tools, if need be. No participant installed a new tool; however, a few participants used different web applications during their task. The details of the tools used and their purpose are summarized in the Results section.

Experimental Group

For all three fidelities, the participants from the experimental group were provided with a Surface Studio 2. Prior to the study, we had installed Eve on the Surface Studio 2 and made sure to uninstall any other UI prototyping tools present on it. We also closely monitored that participants from the experimental group did not use any UI prototyping tools other than Eve.

7.6 measurements

Between the experimental and control groups, we analyzed the variations in physical demand, mental demand, temporal demand, performance, effort, frustration, and the overall subjective workload experienced during UI prototyping using the NASA-TLX questionnaire (Pandian and Sarah. Suleri, 2020). Additionally, we recorded the number of UI elements sketched by the participants during lo-fi prototyping and summarized the number of UI elements correctly identified, wrongly identified, and unidentified by Eve. As a result, we calculated the UI element detection accuracy, precision, and recall.
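With these three counts per participant (correctly identified, wrongly identified, and unidentified elements), the metrics can be computed as in the sketch below. This reading of the counts reproduces the values reported later in Table 7.3 (e.g., 17 correct, 5 wrong, and 3 unidentified elements give an accuracy of 0.68, a precision of 0.77, and a recall of 0.85); the code itself is only an illustration.

def detection_metrics(correct, wrong, unidentified):
    """UI element detection metrics from per-participant counts.

    correct      -- sketched elements identified with the right UI element class
    wrong        -- elements identified with a wrong class
    unidentified -- sketched elements the detector missed
    """
    precision = correct / (correct + wrong)            # share of detections that were right
    recall = correct / (correct + unidentified)        # share of sketched elements found
    accuracy = correct / (correct + wrong + unidentified)
    return accuracy, precision, recall

# Example with the counts of participant S3E01 from Table 7.3:
print([round(m, 2) for m in detection_metrics(17, 5, 3)])  # -> [0.68, 0.77, 0.85]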

7.7 task categories

Participants from both the experimental and control groups were randomly assigned one of the eight task categories to prototype an Android smartphone application: shopping, booking, food, music, news, photos, social media, and weather (Appendix b).

Control Group (Traditional)
ID      Study1 ID   Gender   Experience (years)   Task
S3C01   S1P36       Female   2                    Shopping
S3C02   S1P03       Male     1.5                  Booking
S3C03   S1P04       Female   3                    Food
S3C04   S1P06       Female   2.5                  Music
S3C05   S1P14       Female   1.5                  News
S3C06   S1P20       Male     4.5                  Photos
S3C07   S1P17       Male     3                    Social Media
S3C08   S1P01       Female   4                    Weather
S3C09   S1P30       Male     3.5                  Shopping
S3C10   S1P21       Female   4                    Booking
S3C11   S1P18       Male     1                    Food
S3C12   S1P42       Male     2.5                  Music
S3C13   S1P26       Female   2.5                  News
S3C14   S1P10       Male     3.5                  Photos
S3C15   S1P08       Male     5                    Social Media
S3C16   S1P25       Female   2                    Weather

Experimental Group (Eve)
ID      Study1 ID   Gender   Experience (years)   Task
S3E01   S1P35       Female   2.5                  Shopping
S3E02   S1P41       Female   5                    Booking
S3E03   S1P38       Male     1.5                  Food
S3E04   S1P31       Male     2                    Music
S3E05   S1P02       Male     2.5                  News
S3E06   S1P07       Female   3                    Photos
S3E07   S1P40       Male     4.5                  Social Media
S3E08   S1P11       Male     2                    Weather
S3E09   S1P19       Female   3.5                  Shopping
S3E10   S1P27       Female   3                    Booking
S3E11   S1P32       Female   1                    Food
S3E12   S1P22       Male     2.5                  Music
S3E13   S1P37       Male     3.5                  News
S3E14   S1P15       Female   2.5                  Photos
S3E15   S1P45       Female   2.5                  Social Media
S3E16   S1P33       Male     4                    Weather

Table 7.1: Participant details and task allocation for workload analysis of Control Group (Traditional) and Experimental Group (Eve).

7.8 procedure

After a brief introduction to the workload study, participants were asked to provide informed consent and demographic information. Before we introduced the participants to the assigned task categories, they were asked to complete a reference task, such as a mental calculation, and then assess its workload using the NASA-TLX questionnaire. Reference tasks help decrease the between-groups variability by better calibrating participants with the various dimensions of NASA-TLX (Gore, 2010; S. G. Hart et al., 1988). We used NASA-TLX (Pandian and Sarah. Suleri, 2020) to capture the participants’ workload. Before starting each task, we gave a verbal description of the assigned task category and its features. In addition, the participants belonging to the experimental group were given a thorough introduction to all the features of Eve and the intelligent approach of fidelity transformation it offers. Participants were given time to get acquainted with Eve and try it out in advance. We answered any questions asked by the participants. Once the participants were comfortable with using Eve, they proceeded with creating their prototypes according to the assigned task category. The participants belonging to the control group were restricted from using the comprehensive approach (Eve) to prototype the assigned task category. They could use any existing UI prototyping tools and techniques to create their UI prototypes. Once the participants were clear about the instructions, they were given eight hours to create all three fidelities of a UI prototype according to the task category and UI prototyping approach assigned to them. While the participants were performing their task, the moderator made notes based on the qualitative comments and observations regarding each fidelity. As soon as the participants finished a certain prototyping fidelity, they were invited for follow-up interviews in a semi-structured format (~10 min) to share their qualitative feedback and to complete the NASA-TLX questionnaire regarding their overall experience while creating that fidelity. One primary interviewer and one secondary interviewer (note-taker) conducted these interviews. The interviews were audio recorded and later transcribed. During these interviews, we also collected the UI prototypes created by the participants. After the follow-up interview was finished, participants could continue creating the next fidelity. Once the participants had created all three fidelities of the UI prototype, they were asked to evaluate the overall workload experienced during the entire UI prototyping process using NASA-TLX.

7.9 analysis

We collected data from two independent groups of participants (Experimental, Control) using the NASA-TLX questionnaire. NASA-TLX uses an ordinal scale to capture subjective values from participants. The variance of the data collected using the comprehensive approach (Eve) and the traditional approach is non-homogeneous. Therefore, to test for significant differences in the individual dimensions, we used the two-tailed Mann-Whitney U test (Mann et al., 1947) for independent samples (level of significance < 0.05). Unlike the ordinal values of subjective workload (NASA-TLX), the UI element detection accuracy, precision, and recall are on an interval scale. Therefore, we converted the accuracy, precision, and recall to an ordinal scale using average ranking and used Spearman’s rank correlation coefficient (Spearman, 1904) to measure the correlation between them. We also collected qualitative feedback from participants during follow-up interviews. To analyze this data, we followed the inductive analysis approach of the Grounded Theory methodology (Strauss et al., 1994). Using the open-coding approach, we developed a coding scheme based on our initial observations. Two coders independently coded four transcripts (two transcripts per group) to refine the coding scheme. For further discussion, we used the affinity mapping technique to arrange the coding themes. Next, we iteratively checked another two transcripts individually. After a few iterations, both coders reached a substantial level of agreement (Cohen’s kappa, κ=0.74). In the following sub-sections, we provide an analysis of the collected data, structured in terms of subjective workload, physical demand, mental demand, temporal demand, performance, effort, and frustration.
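As an illustration of the statistical procedures described above, the following SciPy-based sketch runs a two-tailed Mann-Whitney U test on two independent samples and a Spearman rank correlation. The arrays are placeholder values, not the study data.

# Illustrative use of the tests described above (placeholder data, not the study data).
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr

workload_eve = np.array([22, 36, 32, 60, 50, 40, 62, 34, 10, 66, 41, 47, 55, 57, 20, 16])
workload_trad = np.array([55, 91, 87, 67, 61, 69, 45, 55, 71, 79, 57, 77, 67, 64, 49, 54])

# Two-tailed Mann-Whitney U test for two independent samples.
u_stat, p_value = mannwhitneyu(workload_eve, workload_trad, alternative="two-sided")

# Spearman's rank correlation between workload and a placeholder detection-accuracy ranking.
detection_accuracy = np.linspace(0.60, 0.85, num=16)
rho, rho_p = spearmanr(workload_eve, detection_accuracy)

print(f"U={u_stat:.1f}, p={p_value:.5f}, Spearman rho={rho:.2f}")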

7.10 results & discussion

Overall, the average subjective workload experienced by participants using the comprehensive approach (Eve) (M=40.5, SD=22.59) was significantly less than the average subjective workload experienced using the traditional approach (M=64.04, SD=10.18) (Figure 7.3, 7.4 & Table 7.2).


Figure 7.3: Comparison of average workload experienced by participants of workload study during the entire process of prototyping using traditional versus comprehensive approach (Eve).

This difference was statistically significant (U=40.5, nc=ne=16, p=0.00052) and hence, H0 is rejected. The qualitative feedback from the participants of the experimental group revealed that two factors played a significant role in decreasing the overall workload of UI prototyping.

1. All-in-one solution: Participants from the control group used an average of four tools (nmin=3, nmax=6) to create their UI prototype. In contrast, participants from the experimental group reported that it was convenient for them to create and manage their prototypes using only one tool. They also pointed out that Eve would be useful for them in managing prototypes for different projects instead of keeping track of their UI designs in multiple tools. (Experimental group, n=13, 81.25%)

I didn’t like that I had to learn a new tool, but I like the fact that I just had to learn one tool, and it did pretty much everything I needed. I usually use around four to five tools just to make one prototype... I liked that I could make the prototype and share it using the same tool. - S3E12


Figure 7.4: Comparison of physical demand, mental demand, temporal demand, performance, effort, and frustration experienced by participants of workload analysis during the entire process of prototyping using traditional approach versus comprehensive approach (Eve).

2. Automation of fidelity transformation: Participants reported that as they progressed from lo-fi to higher fidelities, they already had a starting point, and they did not need to start from a blank canvas with every fidelity. (Experimental group, n=11, 68.75%)

Oh that (fidelity transformation) was heaven. I had to do way less. So I could actually spend more time on polishing the design than making it again from the blank screen. - S3E06

Physical Demand

Overall, the average physical demand for the comprehensive approach (Eve) (M=45.62, SD=71.92) was half the average physical demand of the traditional approach (M=90.62, SD=104.02). However, this difference was not statistically significant (U=88.5, nc=ne=16, p=0.06384). The qualitative feedback revealed that the physical demand of UI prototyping using the traditional approach largely depended on the manual labor involved in creating all three fidelities. A few of these participants (n=6, 37.5%) did not consider this manual work strenuous. On further investigation, we found that this opinion does not depend on their prior UI prototyping experience in years (M=2.67, SD=1.4), as they are evenly distributed.

Summary of the adjusted NASA-TLX ratings (rating 0-100 × weight 0-5, giving adjusted ratings 0-500) and the overall workload (0-100), reported as mean ± SD per group:

Dimension            Control (Traditional)   Experimental (Eve)
Physical Demand      90.62 ± 104.02          45.62 ± 71.92
Mental Demand        256.25 ± 78.05          104.38 ± 77.89
Temporal Demand      181.88 ± 105.91         55.00 ± 80.91
Performance          30.00 ± 28.28           154.38 ± 129.87
Effort               265.00 ± 113.02         127.50 ± 96.09
Frustration Level    136.88 ± 127.16         120.62 ± 109.57
Overall Workload     64.04 ± 10.18           40.50 ± 22.59

Table 7.2: Data collected using NASA-TLX for the entire process of prototyping.

Participants who used Eve had mixed opinions regarding the physical demand they experienced. A few of them (n=11, 68.75%) reported that the automation of fidelity transformation reduced the manual work to almost nothing. However, the remaining participants (n=5, 31.25%) felt they still had to do substantial manual work to create their UI prototypes. On further investigation, we found that the average prior experience of these participants is 2.1 years (1-3 years). This does not necessarily rank these five participants as novices, but they are on the lower side of the average of 2.5 years (1-5 years). We speculate that their subjective perception of physical demand could be high due to their lesser prior experience of UI prototyping. However, there is no or only a weak correlation between the physical demand and the accuracy (rs=-0.11) and precision (rs=0.27) of the UI element detection during fidelity transformation.

Mental Demand

Overall, the average mental demand for the comprehensive approach (Eve) (M=104.38, SD=77.89) was less than half the average mental demand for the traditional approach (M=256.25, SD=78.05). This difference is statistically significant (U=20.5, nc=ne=16, p=0.00003). The participants using the comprehensive approach (Eve) reported that the visibility and clarity of features decreased their mental load during the task. Also, there is a moderate negative correlation between the mental demand and the accuracy (rs=-0.43) and precision (rs=-0.61) of the UI element detection during fidelity transformation.

Temporal Demand

The average temporal demand of the comprehensive approach (Eve) (M=55, SD=80.91) was less than one-third of the average temporal demand of the traditional approach (M=181.88, SD=105.91). This difference is statistically significant (U=36.5, nc=ne=16, p=0.00029). Our participants explained that deciding on UI design details such as design layout, color palettes, and font families takes up quite a lot of time during traditional prototyping, but this is not the case with the comprehensive approach (Eve), since they could try out multiple palettes and make application-wide design changes quickly.

Upon further investigation, we found that there is a weak negative correlation between the temporal demand and the accuracy (rs=-0.36) and precision (rs=-0.25) of the UI element detection during fidelity transformation.

Performance

The subjective perception of the performance of participants using the comprehensive approach (Eve) (M=154.38, SD=129.87) was almost five times higher than the subjectively perceived performance of the traditional approach (M=30, SD=28.28). This difference is statistically significant (U=35.5, nc=ne=16, p=0.00024). Overall, the participants expressed that they felt confident in their designs following the comprehensive approach (Eve).

I like the fact that I could make really neat and clean UI designs. I could actually create what I thought the design could be. The plus side is that I only used one tool and it was pretty quick. - S3E04

However, their perceived performance has a weak negative correlation with the accuracy (rs=-0.31) and precision (rs=-0.25) of the UI element detection during fidelity transformation.

Effort

The average effort experienced by participants using the comprehensive approach (Eve) (M=127.5, SD=96.09) was less than half the average effort experienced by participants during the traditional approach (M=265, SD=113.02). This difference is statistically significant (U=41.5, nc=ne=16, p=0.00058). Participants following the traditional approach expressed that they had to start the design process from scratch, which took considerable effort. On the other hand, participants following the comprehensive approach (Eve) reported a significant decrease in effort due to the ease with which they could build on top of the automatically generated me-fi and hi-fi designs. However, their perceived effort has a weak negative correlation with the accuracy (rs=-0.28) and precision (rs=-0.32) of the UI element detection during fidelity transformation.

Frustration

The average frustration experienced using the comprehensive approach (Eve) (M=120.62, SD=109.57) was slightly less than the average frustration experienced using the traditional approach (M=136.88, SD=127.16). However, this difference is not statistically significant (U=120.5, nc=ne=16, p=0.39518). The qualitative feedback revealed that maintaining all fidelities of a UI prototype within one tool reduced participants’ frustration of keeping track of everything across different tools. Upon querying the participants regarding the newness of Eve, they reported that it was easy to learn and did not add to their frustration level. These findings are also in line with the ease of use and learnability ratings from our usability study. Also, their frustration level has a weak negative correlation with the accuracy (rs=-0.29) and no correlation with the precision (rs=-0.06) of the UI element detection during fidelity transformation. Therefore, we infer that Eve did not impact frustration in either a positive or negative direction.

(a) (b)

Figure 7.5: Comparison of average workload, physical demand, mental demand, temporal demand, performance, effort, and frustration experienced by participants of workload study during the lo-fi prototyping using traditional versus comprehensive approach (Eve).

7.10.1 Lo-Fi

During lo-fi prototyping, the average subjective workload experienced by participants using the comprehensive approach (Eve) (M=16.25, SD=13.47) was less than one-third of the average subjective workload experienced using the traditional approach (M=52.37, SD=15.39). This difference is statistically significant (U=13.00, nc=ne=16, p=0.000008) (Figure 7.5, Appendix c.1).

Participants of the control group used paper & pen (n=10, 62.5%) and whiteboard & pen (n=6, 37.5%) techniques to reify their lo-fi designs (Figure 7.6a), whereas participants of the experimental group used Eve to sketch their UI designs (Figure 7.6b). Qualitative feedback revealed that participants using Eve made use of editing and control features to undo, redo, cut, copy, paste, and duplicate screens. As a result, their prototyping experience was less stressful, and they could finish their tasks quickly (n=13, 81.25%).

It helps me in doing things again and again. I liked that. I had to make one thing once and use it again if I need to. Especially in lo-fi, I feel helpless when I can’t undo. On paper, if I have to change one thing on one screen, I have to change it again on all screens. That was automatic here (Eve). - S3E14

(a) (b)

Figure 7.6: Lo-fi designs created by participants of (a) Control group (traditional approach) (b) Experimental group (Eve)

(a) (b)

Figure 7.7: Comparison of average workload, physical demand, mental demand, temporal demand, performance, effort, and frustration experienced by participants of workload study during the me-fi prototyping using traditional versus comprehensive approach (Eve).

7.10.2 Me-Fi

During me-fi prototyping, the average subjective workload experienced by participants using the comprehensive approach (Eve) (M=42.67, SD=23.88) was significantly less than the average subjective workload experienced using the traditional approach (M=57.58, SD=12.66). This difference is statistically significant (U=79.50, nc=ne=16, p=0.03519) (Figure 7.7, Appendix c.2). Participants following the traditional approach reported using Sketch app (Sketch) (n=5, 31.25%), Figma (Figma) (n=6, 37.5%), Adobe XD (Adobe XD) (n=2, 12.5%), and Adobe Illustrator (Adobe Illustrator) (n=3, 18.75%) to design me-fi UIs (Figure 7.8a). Participants also used Invision (InVision Studio) to create interactions (n=5, 31.25%). They further used Iconify (iconify on Iconfinder) (n=6, 37.5%) and FlatIcon (Flat Icon) (n=4, 25%) to search for different icons. The rest used the built-in icon collections of their tools (n=6, 37.5%). They used Figma (n=3, 18.75%), Adobe Photoshop (n=5, 31.26%), Sketch app (n=2, 12.5%), and Adobe Illustrator (n=6, 37.5%) to customize the downloaded icons and to create illustrations. They used online tools such as Coolors (Coolors.co) (n=7, 43.75%), Color Hunt (Color Hunt) (n=6, 37.5%), and Paletton (Paletton) (n=3, 18.75%) to generate color palettes for their me-fi designs. In order to add images to their designs, they used online collections of royalty-free images from Unsplash (Unsplash) (n=8, 50%), Pexels (Pexels) (n=6, 37.5%), and FreeImages (FreeImages.com) (n=2, 12.5%). Participants belonging to the experimental group utilized the built-in collection of Eve for color palettes, icons, illustrations, and images to create their me-fi designs (Figure 7.8b).

Some participants used Eve to create custom color palettes (n=9, 56.25%). A few participants also customized icons (n=6, 37.5%) from Eve’s icon collection for their me-fi designs.

(a) (b)

Figure 7.8: Me-fi designs created by participants of (a) Control group (traditional approach) (b) Experimental group (Eve)

7.10.3 Hi-Fi

During hi-fi prototyping, the average subjective workload experienced by participants using the comprehensive approach (Eve) (M=40.12, SD=21.33) was significantly less than the average subjective workload experienced using the traditional approach (M=67.04, SD=16.83). This difference is statistically significant (U=46.5, nc=ne=16, p=0.00113) (Figure 7.9, Appendix c.3). Some participants from the control group used the same prototyping tools as they had used in me-fi to further enhance it into a hi-fi prototype (n=9, 56.25%) (Figure 7.10a). Others used Android Studio (Android Studio) (n=4, 25%) and Neonto Studio (Neonto Studio) (n=3, 18.75%) to code the front-end of the hi-fi prototype. The back-end was not implemented.

(a) (b)

Figure 7.9: Comparison of average workload, physical demand, mental demand, temporal demand, performance, effort, and frustration experienced by participants of workload study during the hi-fi prototyping using traditional versus comprehensive approach (Eve).

Almost all of them utilized the same icons, color palettes, and images they had used during me-fi (n=12, 75%). A few of them modified the color palettes and images in hi-fi (n=4, 25%). Participants belonging to the experimental group enhanced their me-fi designs into hi-fi using Eve (Figure 7.10b). Some participants additionally exported the code and executed it to preview the hi-fi design using Android Studio (n=7, 43.75%).

(a) (b)

Figure 7.10: Hi-fi designs created by participants of (a) Control group (traditional approach) (b) Experimental group (Eve)

7.10.4 Fidelity Transformation

We observed a moderate negative correlation between the workload experienced during me-fi and the UI element detection accuracy (rs=-0.47) and precision (rs=-0.55). Also, there is a weak negative correlation between the experienced me-fi workload and recall (rs=-0.37). This implies that participants who experienced high accuracy, precision, and recall generally experienced less workload during me-fi prototyping (Figure 7.11, Table 7.3). This observation, of course, has a few exceptions.

(a) Precision & Workload (b) Accuracy & Workload (c) Recall & Workload

Figure 7.11: Correlation between subjective workload and UI element detection accuracy, precision, recall experienced by participants during the workload analysis.

This relation between workload, detection accuracy, and precision was also reflected in the qualitative feedback collected from participants regarding me-fi prototyping.

The more element got detected, the less work I had to do, which was good. At first, when I had to check if everything was ok. It was strange because it was new, I guess. But once I got used to it, it helped. - S3E09

It’s nice, but the problem is when it misses things or even if it detects them wrong. Because if I drew a lot of elements in a screen, it takes a lot of effort to check them. - S3E11

ID      Screens   Drawn      Correctly    Unidentified   Wrongly      Accuracy   Precision   Recall
                  elements   identified                  identified

S3E01   2         21         17           3              5            0.68       0.77        0.85
S3E02   4         38         34           3              6            0.79       0.85        0.92
S3E03   4         32         27           4              3            0.79       0.90        0.87
S3E04   2         18         16           1              3            0.80       0.84        0.94
S3E05   2         15         12           3              2            0.71       0.86        0.80
S3E06   3         19         18           1              3            0.82       0.86        0.95
S3E07   4         24         20           2              3            0.80       0.87        0.91
S3E08   2         29         24           2              5            0.77       0.83        0.92
S3E09   4         45         39           4              5            0.81       0.89        0.91
S3E10   3         34         28           3              4            0.80       0.88        0.90
S3E11   3         32         27           3              6            0.75       0.82        0.90
S3E12   3         26         21           3              6            0.70       0.78        0.88
S3E13   2         15         13           2              2            0.76       0.87        0.87
S3E14   3         21         18           1              4            0.78       0.82        0.95
S3E15   3         20         17           2              4            0.74       0.81        0.89
S3E16   2         10         8            2              3            0.62       0.73        0.80

Mean    2.93      25.20      21.47        2.40           3.93         0.76       0.84        0.89
SD      0.80      9.70       8.43         0.99           1.39         0.05       0.04        0.05

Table 7.3: The number of UI elements the participants sketched, the number of elements correctly identified, wrongly identified, and the number of unidentified UI elements during the Workload Study.

7.11 summary

The results of the workload study indicate that the subjective workload experienced by UI/UX designers using the comprehensive approach offered by Eve is significantly less than that experienced with the traditional UI prototyping approach. The comprehensive support provided by Eve eliminates the need for switching between various UI prototyping tools while progressing through lo-fi, me-fi, and hi-fi. Consequently, using Eve resulted in a notable reduction in the mental demand, temporal demand, and effort experienced by UI/UX designers. The overall perceived performance increased five-fold using the comprehensive prototyping approach offered by Eve. The automation of fidelity transformation is another factor that played a vital role in decreasing the workload of UI prototyping. The accuracy, precision, and recall of UI element detection from lo-fi sketches have a moderate negative correlation with the subjective workload experienced during me-fi prototyping.

8 LIMITATIONS AND FUTURE WORK

Besides the promising results of the usability and workload analyses described in the previous chapters, our research on providing technological support to traditional UI prototyping using Eve has some limitations that we aim to address in our future work. As mentioned earlier, the workload experienced by UI/UX designers during me-fi prototyping is influenced by the accuracy and precision of UI element detection. Currently, Eve uses MetaMorph to detect 21 UI elements. If designers sketch something other than these 21 UI elements, it either remains undetected or is identified wrongly. In the future, we plan to extend this list of UI elements and improve the UI element detection accuracy and precision. In the worst-case scenario, if most of the UI elements are wrongly detected or undetected, designers have to either change them manually or clear the canvas and start from scratch. Currently, if this situation occurs, designers can manipulate the detection results during me-fi. In the future, we would like to investigate UI element detection at the time of sketching. We further plan to compare the impact of providing designers with feedback regarding detection either during lo-fi sketching or once they switch to me-fi. The UI design changes made in lo-fi are reflected in higher fidelities. However, if a designer at a later stage changes something in higher fidelities, it is not reflected in lower fidelities. In the future, we would like to investigate the need for and impact of backward propagation of UI design changes.


Part II Rapid Prototyping

Synopsis

Agile development requires lean UX designers to perform rapid prototyping and swift evaluations with experts and end-users to ensure quick project releases. We conducted our formative study with 15 lean UX designers to understand their everyday workflow and challenges during rapid prototyping. Our results revealed that rapid prototyping becomes tedious for lean UX designers due to tight deadlines, scattered UI design knowledge, and developers' inability to reproduce the same UI design quality. To address these concerns, we introduced a UI design pattern-driven approach for the rapid prototyping of smartphone applications. To realize this approach, we introduced Kiwi, a UI design patterns and guidelines library that consolidates the UI design knowledge for smartphone applications. Besides providing a problem statement (what), the rationale (why), context (where), and a solution (how) for each UI design pattern, Kiwi also provides GUI examples and layout blueprints. As per the usability evaluation using SUS, Kiwi scored an average of 77.6 points out of 100, which implies overall good usability and high learnability.

We further investigated the impact of using UI design pattern-driven approach on the subjective workload of rapid prototyping of smartphone applications. Our workload study using NASA-TLX revealed that the overall subjective workload, physical demand, and effort experienced by lean UX designers using the pattern-driven approach are significantly less than the traditional rapid prototyping approach.

9 FORMATIVE STUDY ON RAPID PROTOTYPING

In agile development, lean UX designers perform rapid prototyping and swift evaluations with experts and end-users to ensure quick project releases. We conducted a formative user study using semi-structured interviews to understand the common practices, strategies, and pain points of lean UX designers during rapid UI prototyping (Kipi, 2019).

9.1 participants

For our formative study, we recruited 15 participants (F=8, M=7) using purposive and snowball sampling. Our recruitment criteria for participation in this study required participants to have at least one year of prior rapid prototyping experience. Participants were 28 ± 9.8 (21-35) years old and had 3.75 ± 1.12 (2 - 4) years of prior experience in rapid UI prototyping (Figure 9.1). Our participants included 7 UX Designers and 8 Product Designers. Participants were compensated for their participation.


(a) Occupation (b) Prior Experience

Figure 9.1: Demographics and prior experience of participants of the formative study on rapid prototyping.


9.2 procedure

We conducted the semi-structured interviews (~45 min) in the natural environment of the participants. Each interview was audio recorded and later transcribed. Interviews were conducted by one primary interviewer and one secondary interviewer (note-taker). To begin with, we explained the purpose of the formative study to the participants. We requested them to provide informed consent, demographics, prior rapid prototyping experience, and willingness to participate in the workload study later. Once this information was collected, participants were asked open-ended questions regarding their common prototyping practices, for example: "During the process of rapid prototyping, what steps do you normally take?". They were also asked to focus on how they prototype in practice, rather than how it is described in theory, since understanding real-life rapid prototyping practices is more relevant for our purposes. During these interviews, we also gathered various documents regarding UI designs to analyze the documentation conventions of different lean UX designers.

9.3 analysis

To analyze the data collected from our formative study, we followed the inductive analysis approach with affinity mapping from Grounded Theory methodology (Strauss et al., 1994). Using the open-coding approach, we developed an initial coding scheme based on our initial observations. Two coders independently coded two transcripts to refine the coding scheme. For further discussion, we used the affinity mapping technique to arrange the coding themes. Next, we iteratively checked another three transcripts individually. After a few iterations, both coders reached a substantial level of agreement (Cohen’s kappa, κ=0.78).

9.4 key findings

Our analysis of the collected data revealed a few interesting findings:

Time Constraint

Lean UX designers often face tight deadlines for creating and evaluating prototypes rapidly. Our findings indicate that these time constraints of project design and development schedules hinder lean UX designers from iterating their designs multiple times. This finding is in line with the insights drawn from the experiment performed by S. P. Dow et al., 2009. Our participants also pointed out that in situations where they have an ample amount of time to go through multiple design iterations, they are able to try out more design concepts. Similarly, when they have more time to critically analyze their designs, they can discover more flaws and constraints of their UI designs. In contrast, time constraints often lead lean UX designers to focus more on realizing a design concept rather than iterating on multiple design concepts (Austin et al., 2003; Schrage, 1999). The tight deadlines and non-iterative practices affect their creativity and compromise the quality of their UI designs. Our participants also pointed out that, due to the lack of time, they often spend most of their time on construction and can only speculate on the performance and usability of their designs.

The tight iteration cycles are usually quite stressful. Mostly, we have schedules that have too short deadlines to really have anything substantial. - S1P06

Scattered UI Design Knowledge

Our participants reported that depending on the target OS, platform, and design language of the project, they refer to different UI design blogs (LukeW; Mockplus; Nielsen Norman Group: UX Training, Consulting, & Research; Smashing Magazine; UX Booth), design language websites (Airbnb; Atlassian; BBC, 2015; Buffer; Clarity; FitBit; Google, 2020; IBM; iOS; Lightning; Lonely Planet; Mailchimp; Microsoft Design; Oracle; Photon; QuickBooks; Shopify Polaris; Solid; Stack Overflow; Ubuntu; Walmart; Zendesk), and UI design books (Deaton, 2003; Krug, 2018; Donald A Norman, 1988, 1999; Unger et al., 2004; R. Williams, 1993) for guidance. Most projects vary in terms of their target platform and design language; consequently, lean UX designers must gather relevant design information from scratch for each project. Alternatively, they maintain their own UI design templates in design repositories to reuse for swiftly creating their UI prototypes.

Our participants pointed out that UI design guidelines and knowledge are scattered across various academic and commercial sources. Keeping track of these UI design knowledge sources becomes tedious for them.

Its like if maintaining my own library of design books and that is a lot of work. I miss a proper library that has all the stuff. - S1P14

Reproducing Same Quality of UI Design

Once the UI designs are mature enough, lean UX designers pass them on to the front-end developers responsible for developing the end product based on the provided UI prototype. The UI design prototypes are communicated as wireframes and design systems that contain information regarding color palettes, typography, iconography, images, and measurements. Our participants mentioned situations where developers were not able to achieve the same UI design quality during development due to a lack of UI design knowledge. Developers seemed to mainly focus on functionalities and not on aesthetic details. As a result, lean UX designers have to spend extra time and effort describing and negotiating the design aesthetics and layout of the UI prototype so that the developers can replicate the design details during development.

Working with developers is a pain, they always ignore the UI design guidelines we provide. - S1P02

9.5 summary

Rapid prototyping is a widely used method to quickly design interactive prototypes that can be evaluated by UI/UX experts and target end-users. However, rapid prototyping becomes tedious for lean UX designers due to their tight deadlines and non-iterative practices, UI design knowledge scattered among numerous resources, and the inability of developers to replicate the same quality of UI design provided to them by lean UX designers.

10 UI DESIGN PATTERN-DRIVEN RAPID PROTOTYPING

Our formative study on rapid prototyping pointed out a few interesting pain points faced by lean UX designers during rapid prototyping. To address these identified problems, we propose a UI design pattern-driven approach for rapid prototyping (Sarah Suleri, Kipi, et al., 2019). Before delving into the details of this approach and how it aims to address the identified pain points, we would like to establish some background.

10.1 background

A UI design pattern is an entity that describes a recurring problem in UIs and proposes a solution to that problem. The proposed solution is usually based on knowledge derived through experience (Tidwell, 2010). Capturing design knowledge in design patterns began with Alexandrian patterns (Alexander, 1977). Christopher Alexander proposed the first pattern standard as a format for documenting architectural patterns with consistency and clarity. According to the Alexandrian pattern standard, each pattern comprises a problem, body (background, motivation, variations), context, example, solution, diagram, and related patterns. A variation of the Alexandrian pattern standard was later utilized by Gamma et al. (1995) and Coplien et al. (1995) to document software design patterns. Later, J. O. Borchers (2000), Coram et al. (1996), Duyne et al. (2002), Tidwell (2010), Van Duyne et al. (2007), and Van Welie and Trætteberg (2000) contributed UI design patterns and pattern standards in the context of Human-Computer Interaction (HCI). Despite the varying naming conventions, these pattern standards captured similar information regarding each pattern.

Pattern Collections

A set of patterns organized into different categories is known as a pattern repository or pattern collection. Various representations of pattern collections are books (Crumlish et al.,


2009; Gremillion et al., 2016; Neil, 2014; Tidwell, 2010), web collections (BBC, 2015; UI-Garage, 2016; Irons, 2003; Laakso, 2003; Lim, 2019; Mobiscroll, 2018; Nicely-Done, 2018; Outsystems, 2017; Pttrns-LLC, 2012; Richard et al., 2011; Sheibley, 2013; Tidwell, 2010; Toxboe, 2007; Van Welie and Trætteberg, 2000; ZURB, 2017) and web kits (Lab, 2018; Shopify Polaris).

Pattern Usage & Benefits

UI design patterns are considered the lingua franca among design teams. They enable multidisciplinary design teams to communicate knowledge in an easily understandable and consistent manner (Erickson, 2000). Similarly, in participatory design, UI design patterns are found easy to understand and useful during the early stages of design by novice designers (Finlay et al., 2002). UI design patterns are also considered a beneficial resource for sharing and discussing design and HCI knowledge with software engineering students (J. Borchers, 2002; Ahmed Seffah, 2003). Based on these promising findings, Javahery and Ahmed Seffah (2002) introduced Pattern-Oriented Design (POD), which proposes a step-by-step design guide with suitable pattern suggestions for creating UI designs. POD uses UI design patterns to assist designers and developers in building designs faster (Seffah et al., 2002). Usability Pattern-Assisted Design Environment (UPADE) (Seffah et al., 2002), UI Design Patterns in iOS Development (UXPin, 2015; Wetchakorn et al., 2015), Kony Visualizer (Inc, 2018), Silk UI (Outsystems, 2017), Damask (J. Lin and Landay, 2008), and Proto.io (Wesson et al., 2017) are a few projects based on the concept of utilizing UI design patterns for creating designs. A few studies on using UI design patterns during the design process suggested that UI design patterns are considered very useful during the design conception phase to discuss multiple design ideas and to redesign existing designs (Bernhaupt et al., 2009; Javahery, Sinnig, et al., 2006). Despite their various acknowledged advantages, the value of UI design patterns is not recognized in the industry (Ahmed Seffah, 2015a). Therefore, in addition to discovering new design patterns, it is also vital to develop more techniques for improving the use of existing patterns and, as a result, to integrate patterns into developers’ and designers’ daily practices (Ahmed Seffah, 2015a).

10.2 proposed approach

With the UI design pattern-driven rapid prototyping approach, we introduce the idea of utilizing UI design patterns as prebuilt sample solutions to create software prototypes rapidly. This approach is inspired by the Pattern-Oriented Design (POD) proposed by Javahery and Ahmed Seffah (2002). As mentioned previously, UI design patterns describe repetitive UI problems and offer suitable solutions to them. We believe that structuring UI design knowledge with respect to UI design problems will make it easier for lean UX designers and developers to locate the desired UI design knowledge. Utilizing the prebuilt UI design solutions provided with each UI design pattern will help compensate for the time constraints. Unifying UI design knowledge from several collections of UI design patterns and UI guidelines will provide a consolidated repository that addresses the scattered UI design knowledge problem. Lastly, providing layout blueprints with each UI design pattern will bridge the design communication gap between lean UX designers and front-end developers.

10.3 summary

We proposed a UI design pattern-driven approach to address the difficulties faced by lean UX designers during rapid prototyping. To realize this approach, we aim to document UI design knowledge as UI design patterns and consolidate them into a unified and rapidly accessible UI design pattern library.

11 KIWI: UI DESIGN PATTERNS & GUIDELINES LIBRARY

To realize the UI design pattern-driven rapid prototyping approach, we introduce Kiwi1,2, a UI design pattern library for smartphone applications (Chi Tran, 2019; Kipi, 2019; Sarah Suleri, Kipi, et al., 2019). Kiwi is a web-based library to ensure rapid accessibility and platform independence (Figure 11.1).

Figure 11.1: Kiwi, a web-based UI design patterns and guidelines library

Kiwi is implemented based on Gaffar’s 7C framework (Gaffar et al., 2003) for a pattern management system:

1. Collect: Collecting UI design patterns and guidelines from various academic and commercial sources

2. Cleanup: Documenting UI design patterns and guidelines in a standard format

3. Control: Visualizing each UI design pattern and guideline using sample GUIs and layout blueprints

4. Certify: Validating UI design patterns

1 https://designwithkiwi.com/
2 https://github.com/sarahsuleri/kiwi


5. Connect: Connecting patterns in relationships

6. Categorize: Grouping patterns and guidelines into categories and application types

7. Contribute: Offering a unified open-source library of UI design patterns and guidelines

11.1 collecting patterns & guidelines

Kiwi consolidates 108 UI design patterns from numerous existing sources, such as pattern languages on websites (Tidwell, 2010; Van Welie and Trætteberg, 2000; Van Welie, Van Der Veer, et al., 2001), pattern collections in books (Crumlish et al., 2009; Duyne et al., 2002; Neil, 2014; Tidwell, 2010; Van Duyne et al., 2007), UXPin e-books (UXPin, 2019), and other pattern collections on the web (Mobiscroll, 2018; Outsystems, 2017; Pttrns-LLC, 2012; Sheibley, 2013; Toxboe, 2007). Additionally, Kiwi contains 596 UI guidelines for smartphone applications from UI design books (Ballard, 2007; Lacey, 2018; J. Nielsen and Budiu, 2012; Donald A. Norman, 2002; Shneiderman, 1998; Soegaard, 2018; Weiss, 2002), research papers (Halpert, 2005; Johnson, 2015; Luchini et al., 2004; Nilsson, 2009), and web resources (Apple, 2018; Google, 2020; Microsoft, 2018a; J. Nielsen, 1995). Guidelines aimed at web content, such as (Caldwell, Reid, et al., 2018; Jacobs et al., 2002; Miniukovich et al., 2017; Rabin et al., 2008), are also adapted for smartphone applications.

11.2 documenting patterns & guidelines

Kiwi documents UI design patterns according to the pattern standard proposed by Ahmed Seffah (2015b), with a few modifications. Each pattern has a name, alias, problem statement, context, solution, rationale, related patterns, references, tags, category, and application types. Additionally, each pattern contains a downloadable sample GUI and layout blueprint to provide supplementary visual information (Figure 11.2). The sample GUI and layout blueprint can be downloaded as SVG files and modified in any prototyping tool that supports the SVG format.

Product Catalog
What: Inform users about all the products.
When: E-commerce applications should start from a product catalog.
How: Provide an overview of all products, each with its own attributes.
Why: A quick overview of all products helps the users in making a purchasing decision.

Figure 11.2: Pattern description with sample GUI and layout blueprint of Product Catalog pattern

Kiwi documents UI design guidelines according to the format proposed by Cronholm (2009). Each UI design guideline includes an instructional tip to describe what to do and how to do it, a rationale to explain the relevance, and a category to group similar guidelines (Figure 11.3).

Figure 11.3: Documenting UI design guidelines in a standard format
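To make the documentation format concrete, the sketch below shows one possible in-memory representation of a Kiwi pattern and guideline record. The class and attribute names are illustrative assumptions that mirror the fields listed above, not Kiwi's actual implementation; the example values are taken from the Product Catalog pattern in Figure 11.2.

from dataclasses import dataclass, field
from typing import List

@dataclass
class UIDesignGuideline:
    """One guideline entry in Cronholm's tip / rationale / category format."""
    tip: str        # what to do and how to do it
    rationale: str  # why the guideline is relevant
    category: str   # e.g. "Visibility" or "Consistency"

@dataclass
class UIDesignPattern:
    """One pattern entry following the (modified) pattern standard used in Kiwi."""
    name: str
    problem: str
    context: str
    solution: str
    rationale: str
    alias: str = ""
    related_patterns: List[str] = field(default_factory=list)
    references: List[str] = field(default_factory=list)
    tags: List[str] = field(default_factory=list)
    category: str = ""
    application_types: List[str] = field(default_factory=list)
    sample_gui_svg: str = ""        # downloadable sample GUI (SVG)
    layout_blueprint_svg: str = ""  # downloadable layout blueprint (SVG)

product_catalog = UIDesignPattern(
    name="Product Catalog",
    problem="Inform users about all the products.",
    context="E-commerce applications should start from a product catalog.",
    solution="Provide an overview of all products, each with its own attributes.",
    rationale="A quick overview of all products helps users make a purchasing decision.",
    category="Shopping",
    application_types=["E-commerce"],
)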

11.3 validating patterns

We validated the collected UI design patterns using the criteria proposed by Alexander (1977), Gaffar et al. (2003), Kahn et al. (2010), and Winn et al. (2002). The validation criteria determine whether a UI design pattern is adequately described and whether its solution resolves the stated problem. If a pattern fulfilled the criteria, we considered it eligible for inclusion in Kiwi.

11.4 connecting patterns & guidelines

We connected various UI design patterns using pattern relationship types proposed by Taleb et al. (2006), namely: similar, competitor, super-ordinate, subordinate, neighboring. We further introduced the sequential relationship type to depict the relationship between patterns that logically follow each other.

[Figure 11.4 shows the knowledge architecture of Kiwi as a hierarchy: application types (e.g., E-commerce, Food, Music, Social Media) contain pattern categories (e.g., Shopping, Getting Input, Feedback, Data); each category contains UI design patterns (e.g., Product Catalog, Product Page, Shopping Cart, Check Out); each pattern lists its constituent UI components (e.g., App Bar: Top, Card, Button, Chip); and each component is linked to UI design guidelines grouped by principles such as Visibility, Aesthetics, Consistency, and Natural Mapping.]

Figure 11.4: Kiwi Structure

For each UI design pattern, we outlined a list of constituent UI components. Each UI component has a set of related UI design guidelines. Figure 11.4 illustrates the knowledge architecture of Kiwi and shows the relationship between UI design patterns and the corresponding UI design guidelines. This approach supports locating UI design guidelines based on the related UI components. Besides, it allows showcasing the application of UI design patterns for different platforms and design languages.
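As a minimal illustration of this pattern-to-guideline lookup, the sketch below assumes two hypothetical mappings, one from each pattern to its constituent UI components and one from each component to its guidelines. The sample entries follow the Figure 11.4 example, but the data structures and names are illustrative, not Kiwi's internal implementation.

# Hypothetical mappings reflecting the hierarchy in Figure 11.4.
pattern_components = {
    "Product Catalog": ["App Bar: Top", "Card", "Button", "Chip"],
}
component_guidelines = {
    "App Bar: Top": [
        ("Visibility", "Place the most-used actions on the left, progressing "
                       "towards the least-used actions on the far right."),
        ("Natural Mapping", "Place navigation (menu, up/back) on the far left."),
    ],
}

def guidelines_for_pattern(pattern):
    """Collect the guidelines relevant to a pattern via its UI components."""
    for component in pattern_components.get(pattern, []):
        for category, tip in component_guidelines.get(component, []):
            yield component, category, tip

for component, category, tip in guidelines_for_pattern("Product Catalog"):
    print(f"{component} [{category}]: {tip}")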

11.5 categorizing patterns & guidelines

The UI design patterns in Kiwi are grouped into 15 categories, namely: Authentication and Privacy, Dealing with data, Feedback, Getting input, Guidance, Interactions, Layout, Menus, Navigation, Organizing the content, Shopping, Social, Anti-Patterns, Dark Patterns, and Others. The objective behind this categorization is to organize UI design patterns according to the problem statements they address or the contexts in which they apply. The categories are derived from existing sources of pattern collections (Toxboe, 2007; Van Welie, Van Der Veer, et al., 2001).

Figure 11.5: HiFi: Pattern Overview

We categorized the UI design guidelines based on the ten usability heuristics by J. Nielsen (1995) and the design principles by Donald A. Norman (2002). The combined and refined set of categories corresponding to these usability principles is: Visibility, Natural Mapping, User Control and Freedom, Consistency, Error Prevention, Recognition, Flexibility, Aesthetics, Recovery, Help and Documentation, Feedback, Constraints, and Affordance.

11.6 application types

We further arranged UI design patterns according to seven different application types: Booking, E-commerce, Food, Music, News, Photos, and Social Media. The primary concept behind this arrangement is to show how various UI design patterns can be utilized in a flow to develop a specific smartphone application. We believe that arranging UI design patterns in terms of application types will help lean UX designers develop a deeper understanding of how multiple patterns work together.

Figure 11.6: HiFi: Application Type Overview
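A minimal sketch of such a flow, assuming a hypothetical mapping from an application type to an ordered sequence of pattern names; the E-commerce sequence follows the Shopping patterns shown in Figure 11.4, but the structure itself is illustrative rather than Kiwi's internal representation.

# Hypothetical: each application type is an ordered flow of UI design patterns.
application_type_flows = {
    "E-commerce": ["Product Catalog", "Product Page", "Shopping Cart", "Check Out"],
}

for step, pattern in enumerate(application_type_flows["E-commerce"], start=1):
    print(f"{step}. {pattern}")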

11.7 summary

Kiwi is a web-based UI design patterns and guidelines library, currently scoped for smartphone applications. The main idea behind Kiwi is to provide designers with quick access to UI design knowledge in a unified repository. Therefore, we collected 108 UI design patterns and 596 UI design guidelines from various sources and documented them in a standard format. Each pattern and guideline in Kiwi is further visualized using a sample GUI and a layout blueprint. For rapid access, we grouped the collected patterns into 15 pattern categories and the guidelines into 13 design categories. To enhance pattern utilization, we additionally constructed seven application types by arranging different UI design patterns in a flow.

12 USABILITY EVALUATION OF KIWI

After the implementation of Kiwi, we proceeded with its usability evaluation using the System Usability Scale (SUS) (Brooke, 1996). This study aimed to quantitatively evaluate the usability of the features, interactions, navigation, content, and UI design of Kiwi (Kipi, 2019).
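For reference, SUS yields a 0-100 score from ten alternately positively and negatively worded items rated on a 5-point scale. The sketch below implements the standard SUS scoring rule (Brooke, 1996); the response vector is an illustrative example, not data from this study.

def sus_score(responses):
    """Standard SUS scoring: odd-numbered items contribute (rating - 1),
    even-numbered items contribute (5 - rating), and the sum is scaled by 2.5."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = 0
    for i, r in enumerate(responses):          # i is 0-based
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return 2.5 * total

print(sus_score([4, 2, 4, 1, 4, 2, 5, 2, 4, 2]))  # illustrative responses -> 80.0

Each participant's questionnaire is scored this way, and the 77.6 reported below is the mean of these per-participant scores.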

12.1 study design

We used purposive and snowball sampling to recruit 21 participants (6 UX designers, 7 Product Designers, 6 Front-end Developers, 2 UX Researchers). Our recruitment criteria for participation in this study required participants to have at least one year of prior UI prototyping experience. All 21 participants (F=12, M=9) were 27.13 ± 3.51 (23 - 35) years old and had 2.45 ± 1.21 (1 - 5) years of prior prototyping experience (Figure 12.1). These participants had not previously participated in the formative study. Participants were compensated for their participation.


Figure 12.1: Demographics and prior experience of participants of the usability study of Kiwi.

99 100 usability evaluation of kiwi

The study took place in a quiet room. Each participant was provided with a laptop, a stylus, and a mouse to create their prototypes using Kiwi. The study was conducted by one primary facilitator and one secondary facilitator (note-taker). As part of the usability evaluation, participants were asked to create the UI design and behavior of a shopping application for Android as a digital prototype using Kiwi. We began by introducing the purpose of the study and asking the participants to provide informed consent and demographic information. We then introduced the participants to the task. Participants created their prototypes in a lab setup and provided feedback using a think-aloud protocol. Once the participants had finished their task, they were asked to fill out the SUS questionnaire. Each evaluation took ~45 minutes.

12.2 results

Kiwi scored an average of 77.6 points out of 100, which denotes an above-average level of usability. Overall, Kiwi was perceived as an easy-to-use and beneficial resource that assists lean UX designers during their rapid prototyping process. Figure 12.2 shows the mean responses to each part of the SUS questionnaire.


Figure 12.2: SUS mean responses for Kiwi

We further analyzed the responses to each part of the questionnaire.

Frequency of Use

Since most participants were lean UX and product designers, they indicated a willingness to use Kiwi frequently during rapid prototyping. Seven participants (33.3%) strongly agreed, and six participants (28.6%) agreed that they would like to use this system frequently (Figure 12.3). However, six participants (28.6%) were neutral, and the remaining two (9.5%) disagreed with the statement.

Figure 12.3: Participants’ preference for frequency of using Kiwi according to SUS.

Complexity

Sixteen participants (76.2%) did not consider the design, interactions, navigation, and content of Kiwi unnecessarily complicated. Five participants (23.8%) strongly disagreed, and eleven participants (52.4%) disagreed that they found the system unnecessarily complex (Figure 12.4). However, three participants (14.3%) were neutral and two participants (9.5%) agreed with the statement.

Figure 12.4: Participants’ perception of complexity of Kiwi according to SUS.

Ease of Use

A majority of the participants (71.4%) considered Kiwi easy to use. Seven participants (33.3%) strongly agreed and eight participants (38.1%) agreed that the system was easy to use (Figure 12.5). Four participants (19%) were neutral and the remaining two participants (9.5%) disagreed with the statement.

Figure 12.5: Participants’ perception of ease of using Kiwi according to SUS.

Need of Technical Support

Almost all participants (80.9%) were confident that they would not need any technical assistance while using Kiwi. Twelve participants (57.1%) strongly disagreed, and five participants (23.8%) disagreed that they would need the support of a technical person to be able to use this system. However, two participants (9.5%) were neutral and two participants (9.5%) agreed with the statement (Figure 12.6).

Figure 12.6: Participants’ perception of need of any technical support while using Kiwi according to SUS.

Integrity

There were no negative remarks regarding the integrity of the library. Nine participants (42.9%) strongly agreed, and nine participants (42.9%) agreed that the various features in Kiwi were well integrated. However, three participants (14.3%) were neutral regarding this statement (Figure 12.7).

Figure 12.7: Participants’ perception of how well integrated Kiwi is according to SUS.

Inconsistency

A majority of participants (76.2%) disagreed that there was too much inconsistency in the system. Nine participants (42.9%) strongly disagreed, and seven participants (33.3%) disagreed with the statement. However, three participants (14.3%) were neutral, one participant (4.8%) strongly agreed, and one participant (4.8%) agreed with the statement (Figure 12.8).

Figure 12.8: Participants’ perception of design inconsistencies in Kiwi according to SUS.

Ease of Learning

A majority of the participants (76.2%) agreed that most people would learn to use this system very quickly. Ten participants (47.6%) strongly agreed, and six participants (28.6%) agreed with the statement (Figure 12.9). However, two participants (9.5%) disagreed and the remaining three participants (14.3%) were neutral.

Figure 12.9: Participants’ perception of ease of learning Kiwi according to SUS.

Difficulty of Use

A majority of the responses (76.2%) reflect that participants did not find Kiwi very cumbersome to use. Ten participants (47.6%) strongly disagreed, and six participants (28.6%) disagreed with the statement (Figure 12.10). However, four participants (19%) were neutral and one participant (4.8%) found the library cumbersome to use.

Figure 12.10: Participants’ perception of difficulty of using Kiwi according to SUS.

Confidence in Use

A majority of the participants (76.2%) felt confident in using Kiwi for rapid prototyping. Nine participants (42.9%) strongly agreed and seven participants (33.3%) agreed with the statement (Figure 12.11). However, two participants (9.5%) disagreed and three participants (14.3%) were neutral regarding their confidence in using Kiwi.

Figure 12.11: Participants’ perception of their confidence in using Kiwi according to SUS.

Need of Prior Knowledge & Experience

Since our participants had prior experience in rapid prototyping, most of them (80.9%) indicated that they did not need to learn a lot of things before they could get going with the system. Thirteen participants (61.9%) strongly disagreed, and four participants (19%) disagreed with the statement (Figure 12.12). However, three participants (14.3%) remained neutral and one participant (4.8%) agreed with the statement.

Figure 12.12: Participants’ perception of the need of prior knowledge and expertise in using Kiwi according to SUS.

12.3 summary

Kiwi scored 77.6 on the SUS, which indicates overall good usability. Participants found Kiwi easy to use and showed an inclination towards using Kiwi frequently to create rapid prototypes. Since all the participants had prior knowledge and experience of rapid prototyping, most of them felt they did not need to learn a lot before using Kiwi. They found the various features of Kiwi well integrated and did not find the library unnecessarily complex. They felt confident in using Kiwi and, therefore, did not require any technical support while using it.

13 WORKLOAD EVALUATION OF KIWI

We furthered our research by investigating rapid prototyping from the perspective of subjective workload (Sarah Suleri, Hajimiri, et al., 2020). Here, workload refers to the perceived level of physical and cognitive burden experienced by lean UX designers during rapid prototyping (Gore, 2010). Similar to our approach with traditional UI prototyping, we followed the subjective numerical measurement technique using the NASA Task Load Index (NASA-TLX) (Gore, 2010; S. G. Hart et al., 1988) to evaluate the subjective workload experienced during rapid prototyping.

13.1 rationale for study

This study aims to quantitatively measure and compare the subjective workload experienced by lean UX designers following the traditional approach versus the UI design pattern-driven approach for rapid prototyping. Here, the term traditional approach denotes the usual rapid prototyping tools and techniques used by lean UX designers in practice, whereas the UI design pattern-driven approach denotes utilizing the UI design patterns provided by Kiwi during rapid prototyping. To summarize, this study shall (i) help understand the subjective workload experienced by lean UX designers during rapid prototyping and (ii) evaluate the impact of using the UI design pattern-driven approach (Kiwi) on the workload of rapid prototyping.

13.2 null hypothesis

We formulated our null hypothesis in terms of the subjective workload experienced by lean UX designers during rapid prototyping.

H0 There is no difference between the subjective workload of rapid prototyping using the traditional approach and UI design pattern-driven approach (Kiwi).


The study designed to test our hypothesis is explained in the following sections.

13.3 participants

In total, 32 participants (16 male, 16 female) took part in the workload study. Participants were 27 ± 3.13 (20 - 34) years old, had 2.45 ± 1.06 (1 - 4) years of prior rapid prototyping experience, and were a mix of 12 UX designers (37.5%), 10 product designers (31.25%), and 10 HCI grad students (31.25%) (Table 13.1, Figure 13.1). We ensured that none of these participants were previously a part of the Usability Study. As a prerequisite, we made sure that none of the participants had used Kiwi for rapid prototyping previously. All participants were compensated for their participation.


Figure 13.1: Demographics and prior experience of participants of the workload study on rapid prototyping.

13.4 study design

The study consisted of two groups of participants (experimental and control). Participants were randomly assigned to the experimental or control group with the constraint that each group had an equal distribution of participants based on their gender and prior prototyping experience. Therefore, we carefully selected 32 participants to ensure an equal distribution of gender and prior experience in both groups (Figure 13.2).


Figure 13.2: Distribution of participants based on their prior experience for the workload study on rapid prototyping.

The experimental group contained ne=16 participants (F=8, M=8) with 2.47 ± 1.12 (1 - 4) years of prior rapid prototyping experience. Similarly, the control group contained nc=16 participants (F=8, M=8) with 2.44 ± 1.03 (1 - 4) years of prior rapid prototyping experience. We compiled a list of eight distinct task categories: E-commerce, Booking, Food, Music, News, Photos, Social Media, and Weather (Appendix b). Each task category consists of three features. Participants belonging to both groups were assigned a random task category. In total, participants had three hours to create rapid prototypes based on the assigned task category. They built their UI prototypes in a lab setup. During these three hours, if the participants had any questions regarding the study, they could ask the moderator for clarification. The experimental group had to use the UI design pattern-driven approach to create their rapid prototypes. In contrast, the control group had the independence to follow the traditional approach of rapid prototyping; the control group was restricted from using UI design patterns during rapid prototyping. In the experimental group, we randomly asked half of the group to use a UI design pattern library documented using a pattern standard and the remaining half to use UI design pattern libraries documented without a pattern standard. In both cases, the UI design decisions and prototyping tools were left solely up to the participants.

13.5 apparatus

The study was performed in a quiet room. Participants were provided with a table and a comfortable chair. Additionally, they were provided with a laptop, a stylus, and a mouse to create their prototypes. Prior to the study, we had already installed all the commonly used UI prototyping tools (Tables 3.1, 3.2, 3.3) on the laptop. Participants were informed about the installed prototyping tools. They were also given the freedom to install any new tools, if need be. No participants installed a new tool; however, a few participants used different web applications during their task.

13.6 measurements

Between the experimental and control groups, we analyzed the variations in physical demand, mental demand, temporal demand, performance, effort, frustration, and the overall subjective workload experienced during rapid UI prototyping using the NASA-TLX questionnaire (Pandian and Sarah. Suleri, 2020).
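As a reminder of how NASA-TLX condenses these dimensions into one score, the sketch below implements the standard weighted scoring: each of the six dimensions receives a 0-100 rating and a 0-5 weight from 15 pairwise comparisons, the adjusted rating is the rating multiplied by the weight, and the overall workload is the sum of adjusted ratings divided by 15 (S. G. Hart et al., 1988). The example values are illustrative, not taken from our participants.

def nasa_tlx_workload(ratings, weights):
    """Weighted NASA-TLX score.

    ratings: dimension -> raw rating on a 0-100 scale.
    weights: dimension -> number of wins in the 15 pairwise comparisons (0-5).
    Returns the adjusted ratings and the overall workload (0-100)."""
    assert sum(weights.values()) == 15
    adjusted = {dim: ratings[dim] * weights[dim] for dim in ratings}
    return adjusted, sum(adjusted.values()) / 15

# Illustrative ratings and weights for one participant.
ratings = {"mental": 60, "physical": 20, "temporal": 50,
           "performance": 30, "effort": 70, "frustration": 40}
weights = {"mental": 4, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}
adjusted, workload = nasa_tlx_workload(ratings, weights)
print(adjusted, round(workload, 2))  # overall workload of about 52.67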

13.7 task categories

Participants from both the experimental and control groups were randomly assigned one of the eight task categories to prototype an Android smartphone application: E-commerce, Booking, Food, Music, News, Photos, Social Media, and Weather (Appendix b).

13.8 procedure

After a brief introduction to the workload study, participants were asked to provide informed consent and demographic information. Before we introduced the participants to the assigned task categories, they were asked to complete a reference task, such as a mental calculation, and then assess its workload using the NASA-TLX questionnaire. Reference tasks help decrease between-groups variability by better calibrating participants with the various dimensions of NASA-TLX (Gore, 2010; S. G. Hart et al., 1988).

ID    Age  Gender  Experience (years)  Profession        Application Type

Control (traditional)
CP1   28   Male    3                   UX designer       Social Media
CP2   24   Male    2                   HCI Grad Student  News
CP3   26   Female  2                   HCI Grad Student  Booking
CP4   24   Female  3                   UX designer       Weather
CP5   25   Female  1                   UX designer       News
CP6   34   Female  4                   UX professional   Food
CP7   23   Male    1.5                 HCI Grad Student  Social Media
CP8   29   Female  3                   UX professional   Photos
CP9   25   Female  4                   UX designer       Music
CP10  27   Male    1.5                 HCI Grad Student  Music
CP11  23   Male    1                   HCI Grad Student  Weather
CP12  30   Male    3.5                 UX designer       Photos
CP13  28   Male    2.5                 UX professional   Food
CP14  30   Female  3                   UX professional   E-commerce
CP15  25   Male    1                   UX designer       Booking
CP16  27   Female  3                   UX professional   E-commerce

Experimental (UI design patterns), library with pattern standard
E1P1  24   Female  2                   HCI Grad Student  Weather
E1P2  27   Female  3                   UX designer       Food
E1P3  32   Female  4                   UX professional   News
E1P4  26   Male    1                   UX designer       Music
E1P5  27   Female  3.5                 UX designer       Social Media
E1P6  24   Male    1.5                 HCI Grad Student  Photos
E1P7  30   Male    3                   UX professional   Booking
E1P8  22   Male    1.5                 HCI Grad Student  E-commerce

Experimental (UI design patterns), libraries without pattern standard
E2P1  28   Male    2.5                 UX professional   Music
E2P2  31   Female  1                   UX designer       Booking
E2P3  30   Female  3                   UX designer       Photos
E2P4  31   Male    4                   UX designer       E-commerce
E2P5  28   Male    3                   UX professional   Weather
E2P6  20   Male    1                   HCI Grad Student  Food
E2P7  32   Female  4                   UX professional   Social Media
E2P8  24   Female  1.5                 HCI Grad Student  News

Table 13.1: Participant details and assigned application types for rapid prototyping for workload analysis.

We used a NASA-TLX web application (Pandian and Sarah. Suleri, 2020) to capture the participant’s workload. Before starting each task, we gave a verbal description of the assigned task category and its features. In addition, the participants belonging to the experimental group were given a thorough introduction to the UI design pattern-driven approach to rapid prototyping. Participants assigned to the UI design pattern library with a pattern standard were introduced to Kiwi (Sarah Suleri, Kipi, et al., 2019). On the other hand, the participants assigned to the UI design pattern libraries without a pattern standard were introduced to five web-based pattern libraries that document UI design patterns without a pattern standard: Pattern Tap (ZURB, 2017), Pttrns (Pttrns-LLC, 2012), UI Garage (UI-Garage, 2016), Mobile Patterns (Sheibley,

2013), and Nicely done (Nicely-Done, 2018). These participants could use any of the five libraries for rapid prototyping. Participants were given time to get acquainted with the assigned UI design pattern libraries and try them out in advance. We answered any questions asked by the participants. Once the participants were comfortable with using the libraries, they proceeded with creating their rapid prototypes according to the assigned task category. The participants belonging to the control group were restricted from using the UI design pattern-driven approach to create rapid prototypes according to the assigned task category; they could use any existing UI prototyping tools and techniques to create their rapid UI prototypes. Once the participants were clear about the instructions, they were given three hours to create a rapid prototype according to the task category and UI prototyping approach assigned to them. While the participants were performing their task, the moderator made notes based on qualitative comments and observations. As soon as the participants had finished the task, they were invited to follow-up interviews in a semi-structured format (~10 min) to share their qualitative feedback and to complete the NASA-TLX questionnaire regarding their overall experience during the task. One primary interviewer and one secondary interviewer (note-taker) conducted these interviews. The interviews were audio recorded and later transcribed. During these interviews, we also collected the UI prototypes created by the participants.

13.9 analysis

We collected data from two independent groups of participants (experimental and control) using the NASA-TLX questionnaire. NASA-TLX uses an ordinal scale to capture subjective values from participants, and the variance of the data collected using the UI design pattern-driven approach and the traditional approach is non-homogeneous. Therefore, to test for significant differences in the individual dimensions, we used the two-tailed Mann-Whitney U test (Mann et al., 1947) for independent samples (level of significance < 0.05). We also collected qualitative feedback from participants during follow-up interviews. To analyze this data, we followed the inductive analysis approach of the Grounded Theory methodology (Strauss et al., 1994). Using the open-coding approach, we developed a coding scheme based on our initial observations. Two coders independently coded four transcripts (two transcripts per group) to refine the coding scheme. For further discussion, we used the affinity mapping technique to arrange the coding themes. Next, we iteratively checked another two transcripts individually. After a few iterations, both coders reached a substantial level of agreement (Cohen’s kappa, κ=0.71). In the following sub-sections, we provide an analysis of the collected data, structured in terms of subjective workload, physical demand, mental demand, temporal demand, performance, effort, and frustration.
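A minimal sketch of this comparison using SciPy's Mann-Whitney U implementation; the two arrays are illustrative placeholders standing in for the per-group overall workload scores, not our actual measurements.

from scipy.stats import mannwhitneyu

# Illustrative overall workload scores (0-100), one value per participant.
control = [62.7, 55.3, 58.7, 70.0, 46.7, 65.0]        # traditional approach
experimental = [38.0, 42.7, 33.3, 46.3, 35.3, 40.3]   # pattern-driven approach

# Two-tailed test for two independent groups.
result = mannwhitneyu(control, experimental, alternative="two-sided")
print(f"U = {result.statistic}, p = {result.pvalue:.4f}")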

13.10 results & discussion

Overall, the average subjective workload experienced by participants using the UI design pattern-driven approach (M=41.75, SD=14.11) was significantly less than the average subjective workload experienced using the traditional approach (M=57.58, SD=12.66) (Figures 13.3 and 13.4; Table 13.2). This difference was statistically significant (U=45.00, nc=ne=16, p=0.001); hence, H0 is rejected.


Figure 13.3: Comparison of average workload experienced by participants of the workload study during rapid prototyping using the traditional versus the UI design pattern-driven approach.

Table 13.2: Data collected using NASA-TLX for rapid prototyping (per-participant ratings, pairwise-comparison weights, and adjusted ratings for physical demand, mental demand, temporal demand, performance, effort, and frustration, together with the resulting overall workload, for the control group and both experimental sub-groups).

Measure           Traditional   UI design pattern-driven   Significance
Workload          57.58         41.75                      U=45.00, p=0.001
Physical demand   114.69        33.75                      U=56.50, p=0.003
Mental demand     187.81        160.62                     U=111.50, p=0.273
Temporal demand   140.62        118.12                     U=116.00, p=0.331
Performance       72.19         61.25                      U=110.50, p=0.259
Effort            242.81        101.88                     U=39.00, p=0.0004
Frustration       105.62        150.62                     U=124.00, p=0.447

Table 13.3: Comparison of subjective workload, physical demand, mental demand, temporal demand, performance, effort, and frustration of using the UI design pattern-driven and traditional approaches to rapid prototyping.

The qualitative feedback from the participants of the experimental group revealed that two factors played a significant role in decreasing the overall workload of rapid prototyping.

1. All-in-one library: Participants from the experimental group reported that it was convenient for them to have all the information regarding UI design in one place. They also pointed out that Kiwi would be useful for them in improving their design knowledge and skills (Experimental group, n=12, 75%).

I usually never think of going to a pattern library to look for solutions. I expected them (UI design pattern libraries) to be very theorical, all textual information. But, I like that they (patterns) had less text, and more visual information. “ - E2P7

2. Reusing pre-built solutions: Participants from the experimental group reported that as they started creating their digital prototypes, they already had a starting point, and they did not need to start from a blank canvas. They could download the GUI sample solutions and customize them as per need (Experimental group, n=15, 93.75%)

I liked that I can download vector templates, and use them in different platforms. “ - E1P2


Figure 13.4: Comparison of physical demand, mental demand, temporal demand, performance, effort, and frustration experienced by participants of workload analysis during rapid prototyping using traditional approach versus UI design pattern-driven approach.

The average physical demand (U=56.50, nc=ne=16, p=0.003) and effort (U=39.00, nc=ne=16, p=0.0004) using the UI design pattern-driven approach were significantly less than with the traditional approach (Figure 13.4). However, there was no significant difference in mental demand (U=111.50, nc=ne=16, p=0.273), temporal demand (U=116.00, nc=ne=16, p=0.331), performance (U=110.50, nc=ne=16, p=0.259), or frustration (U=124.0, nc=ne=16, p=0.447). The qualitative feedback revealed that participants using the traditional approach had to start the UI prototyping process from a blank canvas every time (n=12, 75%). They also reported searching for inspiration and sample content before starting to design their rapid prototypes (n=11, 68.75%), resulting in an increase in the physical demand and effort of rapid prototyping using the traditional approach. In contrast, reusing the ready-made sample solutions (GUIs) helped decrease the physical demand and effort participants experienced during rapid prototyping using the pattern-driven approach (n=14, 87.5%).

The main problem of the task, which resulted in much work, was trying to find real news with good image and text to put them as a sample in the app. “ - C1P4

What I liked was that it (sample GUI) already had the text, so I didn’t have to spend time to find some random texts. And also, it is good to see a sample before prototyping, and it (pattern) shows samples. “ - E1P2

Participants using the pattern-driven approach reported that they did not need to learn a lot before adopting this approach, as it aligned with their traditional workflow of looking for inspiration before starting to design something (n=7, 43.75%). Overall, novice UI/UX designers expressed that they felt more confident in their designs when following the pattern-driven approach (n=4, 25%). However, there is no correlation (rs=-0.03) between the subjective workload experienced and the prior rapid prototyping experience of our participants. A common concern reported by participants using the pattern-driven approach was the findability of patterns in pattern libraries (n=5, 31.25%). It was suggested to tag patterns with multiple relevant keywords to increase their discoverability, findability, and usability.

I knew what I was looking for, but I didn’t know the pattern name for it, it was irritating. “ - E2P6

Furthermore, there is no significant difference between the subjective workload (U=26.00, ne1=ne2=8, p=0.282), physical demand (U=27.00, ne1=ne2=8, p=0.283), mental demand (U=31.00, ne1=ne2=8, p=0.479), temporal demand (U=26.00, ne1=ne2=8, p=0.280), performance (U=21.50, ne1=ne2=8, p=0.143), effort (U=24.00, ne1=ne2=8, p=0.214), and frustration (U=29.50, ne1=ne2=8, p=0.416) experienced using pattern libraries with and without pattern standard documentation. However, the means are in the positive direction for the pattern library using a pattern standard.

13.11 summary

We investigated the impact of using the UI design pattern-driven approach on the workload of rapid prototyping of smartphone applications. The study revealed that the subjective workload experienced by UX designers using the pattern-driven approach is significantly less than the workload experienced using the traditional approach of rapid prototyping. Specifically, there is a significant decrease in the physical demand and effort of rapid prototyping while using pattern-driven prototyping. However, there is no significant difference in the subjective workload experienced using pattern libraries with and without pattern standard documentation for UI design patterns.

14 LIMITATIONS & FUTURE WORK

Our research aims to broaden the awareness and usage of UI design patterns. More specifically, Kiwi aims to develop a deeper understanding of the purpose of each UI design pattern and of how various UI design patterns can work together to assist UI designers and developers during the design and development process. Besides the promising results of the usability and workload analyses described in the previous chapters, our research on adapting the UI design pattern-driven approach to rapid UI prototyping using Kiwi has some limitations that we aim to address in future work.

Currently, Kiwi contains over a hundred UI design patterns for smartphone applications. In the future, we aim to add more patterns to our library and further expand it by integrating other platforms, e.g., web and smartwatch applications.

A common concern reported by participants using the pattern-driven approach was the findability of UI design patterns in the pattern libraries. In the future, we aim to tag patterns with multiple relevant keywords and, additionally, introduce text-based search to increase the discoverability, findability, and usability of UI design patterns.

With Kiwi, we organized multiple UI design patterns with respect to various application types. Currently, designers can download these patterns as SVG files and customize them in various prototyping tools. However, these SVG files do not contain any information regarding the ordering of patterns within an application type. In the future, we aim to expand Kiwi to contain information regarding the interconnection between various UI design patterns.

Currently, Kiwi provides sample GUIs and layout blueprints for each UI design pattern. In the future, we aim to provide the front-end code for each sample GUI to assist the front-end development of each UI design pattern. This expansion aims to support and encourage designers and developers to utilize UI design patterns in their everyday design tasks.


Part III Prototyping for Accessibility

Synopsis

Accessible UI prototyping involves creating UI designs that are perceivable, understandable, and operable by people with a broad range of abilities. In our formative study, we identified that UI/UX designers face difficulty creating accessible UI designs due to the lack of visibility of end-users and the limited usability of UI design guidelines for accessibility. Therefore, we proposed a persona-based approach to designing accessible UIs. Based on this approach, we developed Personify, a graph-based library that links UI design guidelines for accessibility with the respective personas. Personify aims to help UI/UX designers empathize with the target users by increasing their visibility using personas and by linking each persona with the respective guidelines. In the usability evaluation using SUS, Personify scored an average of 76.4 points out of 100, which implies overall above-average usability.

We further investigated the impact of using the persona-based approach offered by Personify on the workload of accessible UI prototyping. Our workload study using NASA-TLX revealed that the subjective workload experienced by UI/UX designers using a persona-based approach (Personify) is significantly less than the workload experienced using the traditional method of accessible UI prototyping. Specifically, there is a significant decrease in the mental demand and effort of UI/UX designers during UI prototyping while using the persona-based approach (Personify).

15 FORMATIVE STUDY ON PROTOTYPING FOR ACCESSIBILITY

After reviewing existing academic and commercial tools, we conducted a formative user study with UI/UX designers (Shanmuga Sundaram, 2020). This study aimed to understand common practices, tools, and strategies UI/UX designers use during accessible UI prototyping. To identify the pain points of UX designers during their workflow of designing accessible UIs, we conducted a survey with 30 UX designers, conducted follow-up semi-structured interviews (~45 min) with 21 UX designers, and analyzed 32 documents regarding user profiling and UI designs.

15.1 participants

For our formative study, we recruited 30 participants (F=17, M=13) using purposive and snowball sampling. Our recruitment criteria required participants to have at least one year of prior experience in UI design in multiple research or industry projects targeting users with accessibility issues. Participants were 28 ± 3.5 (23-38) years old and had 2.93 ± 1.13 (1 - 4) years of prior experience in UI prototyping. Our participants included 19 UX Designers and 11 Product Designers. They were compensated for their participation (Figure 15.1). Table 15.1 shows the details of the 21 designers we recruited for follow-up interviews and their prior experience with accessibility-related research.


Figure 15.1: Demographics and prior experience of participants of the formative study on prototyping for accessibility.


ID   Experience (years)  Accessibility Domain
P1   2.5                 Motor disabilities
P2   3                   Low vision
P3   2                   Low vision
P4   4                   Dyslexia
P5   3                   Alzheimer's
P6   1                   Dyslexia
P7   2                   Dyslexia
P8   2.5                 Complete vision impairment
P9   3                   Color blindness
P10  3                   Dyslexia
P11  2                   Vision impairment
P12  1.5                 Motor disabilities
P13  2                   Motion sickness
P14  4                   Vision impairment, color blindness, motor impairment, asthma, partial/complete vision impairment
P15  3                   Near-/far-sightedness
P16  3                   Color blindness
P17  4                   One-handed people
P18  4                   Blurring issues, color blindness
P19  3                   Color blindness, green/red color blindness
P20  2                   Alzheimer's, deafness, vision impairment
P21  3.5                 Asthma, motion sickness, color blindness

Table 15.1: Experience in years and accessibility domains of the participants of our follow-up interviews.

15.2 procedure

To identify the pain points of UX designers during their workflow of designing accessible UIs, we conducted a survey with 30 UX designers, conducted follow-up semi-structured interviews (~45 min) with 21 UX designers, and analyzed 32 documents regarding user profiling and UI designs. Participants were provided with an online survey with 20 questions regarding their background, their workflow of creating UI designs, their frustrations, the tools they use frequently, and the practices they follow during their UI design process for accessibility. Once we had accumulated the essential information via the survey, we requested participants for a follow-up semi-structured in-depth interview. These interviews concerned specific problems experienced during the accessible UI design process for their respective target user groups. 21 of the 30 UX designers agreed to participate in the follow-up interviews. During these interviews, we also gathered various documents regarding user profiling and UI designs to analyze the documentation conventions of various UX designers.

We conducted the semi-structured interviews (~45 min) in the natural environment of the participants. Each interview was audio-recorded and later transcribed. Interviews were conducted by one primary interviewer and one secondary interviewer (note-taker). To begin with, we explained the purpose of the formative study to the participants. We requested them to provide informed consent, demographics, prior prototyping experience, and their willingness to participate in the workload study later. Once this information was collected, participants were asked open-ended questions regarding their common prototyping practices, for example: "During the process of accessible UI prototyping, what steps do you normally take?". They were also asked to focus on how they prototype in practice rather than how it is known in theory, since understanding real-life UI prototyping practices is much more essential and useful in our case. Then, depending on the prototyping fidelities named by the participant during the interview, each fidelity was discussed in more detail. During these interviews, we also gathered various documents regarding UI designs to analyze the documentation conventions of different UI/UX designers.

15.3 analysis

To analyze the data collected from our formative study, we followed the inductive analysis approach with affinity mapping from the Grounded Theory methodology (Strauss et al., 1994). Using the open-coding approach, we developed an initial coding scheme based on our initial observations. Two coders independently coded two transcripts to refine the coding scheme. For further discussion, we used the affinity mapping technique to arrange the coding themes. Next, we iteratively checked another three transcripts individually. After a few iterations, both coders reached a near-perfect agreement (Cohen’s kappa, κ=0.81).

15.4 key findings

Following this process, we identified a few pain points in the workflow. These pain points are discussed in turn below.

Ghost Users

Our participants pointed out the inaccessibility of their target users and called them ghosts. Due to the delicate nature of their target users, it is common that UX designers have either no or minimal access to their users throughout a project. Therefore, UX designers have to work with very little, if any, user data, which makes it difficult for them to empathize with users’ frustrations and conceptualize solutions. They reported having to imagine what the user would need in most cases.

We make decisions based on assumptions, or what we think we know. - P11

Though I create user files, I end up having user documents that actually fit the people whom I know personally. “ - P2

In most of design projects, we speak with very limited number of people, and it is very hard to get the actual information from this limited amount of data. To create designs for a specific accessible population, we need to go through the available literature and do some research to understand the target group. This was really challenging. “ - P13

Most of the time, I get data from our research team, and I have to design for so and so target users. Like, I have never met them, so I can’t see them, means, I can’t see how they are and what is good for them or not good for them. “ - P20

User Documentation

Our participants conveyed that they are often unsure about the necessity and prioritization of various aspects of user data. They mentioned that identifying the necessity and prioritization of users’ data is vital to figure out which aspects will influence the UI design.

I have data about the user group. But, I find it difficult to document it. I am not sure which characteristics of the target group should be documented. Not sure how to prioritize these characteristics in the document. “ - P11

A lot of information about the target group is unnecessary. Filtering out such unnecessary details is hard. “ - P3

Accessible UI Design Illiteracy

Our participants pointed out a general lack of literacy regarding UI design guidelines, especially for accessibility. They also reported that either the information is scattered and not easy to access, or the guidelines are in a textual form with no practical examples or rationale provided for them. Lastly, they emphasized that even in situations where they have both the user data and the UI design guidelines at hand, they face problems in mapping one to the other.

Design language tells you what to do, how to do but not why to do it. - P1

...the different kinds of information were scattered over different places. And the project was for a specific target group. “ - P14

Personally, I find it difficult to establish a connection between the user data and the UI design. Keeping up the connection is the hardest challenge. Remembering all the details about the users while creating UI design is really difficult. “ - P21

Time Constraints

Our participants stressed the pressures they face due to tight project deadlines. They admitted that this directly affects the quality and depth of their UI designs. In most cases, they end up ignoring the accessibility guidelines and produce a "minimum viable" solution.

I am not proud of it, but when I have a tight deadline, I just do the bare minimum. [UI] design still looks nice but it isn’t accessible. “ - P8 Because of time constraints, I couldn’t consider some accessibility issues as it involved a lot of implementation effort. Couldn’t accommodate all the accessible features, and the system that we developed became not fully usable by the accessible population. “ - P17

Accessibility for everyone is almost difficult. But, of course, everything is possible. With much work and time, it is possible. But, at the moment, it is difficult. “ - P19

15.5 identified needs

We conducted the formative study to identify the workflow and pain points of UX designers while designing accessible UIs. With the data collected, we aim to answer the question:

How can we make it easy for UX designers to design for accessibility?

After analyzing the data collected, we identified the following user needs.

Visibility of the user, so that UX designers can empathize with them and realize the importance of accessible UI design.

Discoverability of UI design guidelines for accessible UI design. Discoverability is the degree of ease with which designers can find all the UI design guidelines when they first encounter a guidelines library.

Findability of UI design guidelines for accessible UI design. Findability is the ease with which designers can locate a specific UI design guideline.

Usability of UI design guidelines for accessibility, so that they can be reflected easily in UI design.

15.6 summary

Our research aims at investigating the pain points of UX designers in their workflow of designing accessible user interfaces (UIs). For this purpose, we surveyed 30 UX designers, conducted 21 follow-up interviews, and analyzed 32 user profiling and UI design documents. We identified the following pain points in our participants’ workflow: 1) limited access to target users; 2) uncertainty regarding the necessity of various aspects of users’ data; 3) lack of knowledge of UI design guidelines for accessibility; and 4) time constraints leading to ignoring accessibility. To address these issues, we present Personify, a UI design guidelines library that organizes pre-existing UI design guidelines for accessibility with respect to personas. By introducing this library, we aim to assist UX designers in utilizing UI design guidelines for creating accessible UI designs.

16 PERSONA-BASED UI DESIGN GUIDELINES

Our formative study on prototyping accessible UIs pointed out a few interesting pain points faced by UX designers while designing accessible UIs. To address these identified problems, we propose organizing UI design guidelines for accessibility with respect to the target user personas. Before delving into the details of this approach and how it aims to address the identified pain points, we would like to establish some background.

16.1 personas

A persona is a fictitious yet realistic representation of target users in terms of their goals and personality characteristics (A. Cooper, 1999). Personas do not necessarily need to be based on real data, but they need to be realistic (Pruitt and Adlin, 2005). Besides a realistic name, photo, and demographics, personas also convey abilities, aptitude, attitude, behavioral patterns, frustrations, emotions, goals, and motivations, among others. The unique aspect of a persona description is that it highlights the behavior and frustrations of the target user base specific to the domain in focus (Chisnell et al., 2005; Goodwin, 2009; Horton et al., 2014). There are four different perspectives on personas: the goal-directed perspective (A. Cooper, 1999), the role-based perspective (Grudin et al., 2002; Pruitt and Adlin, 2005), the engaging perspective (L. Nielsen, 2004), and the fiction-based perspective (Floyd et al., 2008). The first three perspectives agree that persona descriptions should be based on collected data. The fourth, the fiction-based perspective, does not use data as the basis for persona descriptions but creates personas from the designers’ intuition and assumptions. These fiction-based personas are also known as ad hoc personas, assumption personas, and extreme characters. Floyd et al. (2008) discussed three additional types of personas: 1) Quantitative data-driven personas are extracted from natural groupings in quantitative data. 2) User archetypes

131 132 persona-based ui design guidelines

as personas are similar to personas, but more generic, usually defined by role or position. 3) Marketing personas are created for marketing reasons and not to support design. Alan Cooper coined the term Persona and was the first to use it in the Design domain. In his book "The Inmates Are Running The Asylum" (A. Cooper, 1999), he introduced personas as hypothetical archetypes of target users. He claims that personas are the most effective and simplest way of developing precise descriptions of target users and their goals. Cooper strongly suggests that we should aim to focus our system’s design only on one primary persona. He believes that if we aim to create a system that satisfies a broad audience of users, we will satisfy no one (A. Cooper, 1999). Over time, personas have become a physical and logical design tool. Well-structured personas help the design team remember for whom they are designing and what they are designing. Personas that are based on prior user research or assumptions help in avoiding any preconceived notions during the design process (A. Cooper, 2003; S. Mulder et al., 2007). Aer winnowing down the population, UX designers typically end up with anywhere from three to seven relevant personas. These personas are further prioritized as primary, secondary, negative, concerned, and additional personas. Each persona is a single sheet of paper containing a name, picture, demographics, goals, frustrations, and oen telltale quotes. This one-page document becomes a ubiquitous part of the design process. These persona documents are printed out as the cast of characters and distributed at brainstorming sessions, design workshops, and stakeholder meetings. Every deliverable document that is created and given to the clients also has cast-of-characters pages in it. The goal is to make personas unavoidable (A. Cooper, 1999; Mao et al., 2005; Pruitt and Adlin, 2005). Using personas in the design process results in increasing the focus on users and their needs. Personas are considered a useful communication tool and have a significant design influence, such as leading to better design decisions and defining the product’s feature set (A. Cooper, 1999; A. Cooper et al., 2007; Grudin et al., 2002; Long, 2009; Ma et al., 2007; Miaskiewicz et al., 2011; Pruitt and Adlin, 2005). To evaluate the effectiveness of personas in user interface (UI) design, Long et al. con- ducted a study with multiple groups of participants who designed UIs using different method- ologies over five weeks. As per the results, UI designs created using personas and scenarios had better usability than others (Long, 2009). The results also suggested that using personas improved team communication and facilitated user-centered design. The authors claimed that "Personas strengthen the focus on the end-user, their tasks, goals, and motivation. 16.2 accessibilities 133

Personas make the end-user’s needs more explicit and thereby can direct decision-making within design teams more towards those needs." (Long, 2009).

16.2 accessibilities

Accessibility is the practice of making UIs usable and accessible by as many people as possible. Based on the scenario, accessibility can be grouped into three categories: permanent, temporary, situational.

Figure 16.1: Types of Accessibilities - Situation Based (Microsoft Inclusive Design)

Within these categories, impairments are further grouped as follows:

visual impairment includes conditions like color blindness, partial blindness, complete blindness, night blindness, and photosensitivity.

auditory impairment includes ear-related conditions like complete hearing loss, partial hearing loss, low-tone hearing loss, and high-tone hearing loss.

speech impairment includes speech-related conditions like complete mutism, selective mutism, and stammering.

cognitive impairment includes mental limitations like dyslexia, dysgraphia, dyscalculia, and autism. These limitations include difficulty in processing information, learning new things, or concentrating.

motor impairment includes movement-related limitations like loss of limbs, essential tremor, multiple sclerosis, and fat finger problem.

16.3 ui design guidelines for accessible uis

UI design guidelines for accessibility are intended to guide designers on how UI designs can be made accessible and usable for a diverse user base. There are a few existing collections of UI design guidelines for accessibility (Caldwell, M. Cooper, et al., 2008; Caldwell, Reid, et al., 2018; Chisholm et al., 2001; Chisnell et al., 2005; Goodwin, 2009; Horton et al., 2014; IBM Design Accessibility Handbook). Specifically, Web Content Accessibility Guidelines1 (WCAG) (Caldwell, Reid, et al., 2018) is a collection of UI design guidelines to make web content more accessible to people with disabilities. Here, web content generally refers to the information in a web page or web application, e.g., text, images, sounds, code, or markup that defines the structure, presentation, etc. The WCAG 2.1 guidelines are classified into four categories: Perceivable, Operable, Understandable, Robust.

1 https://www.w3.org/TR/WCAG21/

User Agent Accessibility Guidelines2 (UAAG) documents explain how to make user agents accessible to people with disabilities. User agents include browsers, browser extensions, media players, readers, and other applications that render web content. The UAAG 2.0 guidelines are grouped into five categories: Perceivable, Operable, Understandable, Programmatic Access, and Specifications and Conventions.

16.4 proposed approach

To address these problems, we introduce the idea of organizing pre-existing UI design guidelines for accessibility with respect to personas. We believe that organizing UI design knowledge in such a way will make it easy for UX designers to locate desired knowledge and increase accessible UI design literacy. Consolidating UI design knowledge from various collections of accessibility guidelines will provide a unified repository to address the discoverability and findability of UI guidelines. Besides providing textual information, UI design guidelines will be linked to sample GUIs to convey the practical application of UI design guidelines. Each sample GUI shall be marked with the relevant UI design guidelines, which will ensure mapping between personas and their respective accessibility guidelines. This approach uses personas to document the target users’ data in a standard format to make the users visible and relatable. Personas help in avoiding self-referential design and help maintain a common, well-defined user-centric focus (Pruitt and Grudin, 2003).

16.5 summary

We proposed a persona-based organization of UI design guidelines for accessibility to address the difficulties faced by UX designers during prototyping accessible UIs. To realize this approach, we aim to document various impairments in terms of personas and link them with the relevant UI design guidelines to have a consolidated collection that is rapidly accessible by designers.

2 https://www.w3.org/WAI/standards-guidelines/uaag/

17 PERSONIFY: PERSONA-BASED UI DESIGN GUIDELINES LIBRARY

To realize our approach of persona-based organization of UI design guidelines for accessibility, we introduce Personify1,2, a persona-based UI design guidelines library (Shanmuga Sundaram, 2020). Personify is a web-based library to ensure rapid accessibility and platform independence (Figure 17.1). Personify consolidates UI design guidelines for accessibility from numerous academic and commercial sources and documents various personas in a standard format. These personas are associated with each other through various accessibilities and are further connected to the applicable accessibility guidelines (Figure 17.8).


Figure 17.1: Personify, Persona-based UI Design Guidelines Library

1 https://designwithpersonify.com/ 2 https://github.com/sarahsuleri/personify


17.1 collecting ui design guidelines for accessibility

Personify consolidates UI design guidelines from various sources (Caldwell, M. Cooper, et al., 2008; Caldwell, Reid, et al., 2018; Chisholm et al., 2001; Chisnell et al., 2005; Goodwin, 2009; Horton et al., 2014; IBM Design Accessibility Handbook; Jacobs et al., 2002; Miniukovich et al., 2017; Rabin et al., 2008). Each guideline is represented by a card that summarizes the guideline and links it with the URL of the detailed description of the guideline (Figure 17.2). Personify additionally represents all the guidelines in a graphical manner (Figure 17.3).

Figure 17.2: Personify summarizes the UI design guidelines.

(a) WCAG 2.1 (b) Understandable (c) Readable (d) Personas

Figure 17.3: Personify represents all the guidelines in a graphical manner.

17.2 documenting personas

Personify documents personas according to the persona standard proposed by A. Cooper (2003) and S. Mulder et al. (2007), with a few modifications. Each persona has information regarding name, age, profession, location, accessibility, biography, quote, devices in use, frustration, and tech familiarity. Personify provides a short summary of each persona (Figure 17.4a) and a downloadable persona (Figure 17.4b). Personify additionally graphically visualizes all the personas (Figure 17.5).

(a) Persona Summary (b) Downloadable Sample Persona

Figure 17.4: Personify provides a sample persona, "Ammar - the Music Teacher with Complete Blindness".
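
As an illustration, the persona fields listed in Section 17.2 can be captured in a simple structured record. The following Python sketch is a hypothetical model of such an entry; the class, field names, and example values (drawn loosely from the sample persona in Figure 17.4) are illustrative assumptions rather than Personify's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """Minimal persona record following the fields Personify documents."""
    name: str
    age: int
    profession: str
    location: str
    accessibility: str          # impairment the persona represents
    biography: str
    quote: str
    devices: list[str] = field(default_factory=list)       # not stated in the sample persona
    frustrations: list[str] = field(default_factory=list)
    tech_familiarity: str = "unknown"

# Example entry based on the sample persona shown in Figure 17.4
ammar = Persona(
    name="Ammar",
    age=38,
    profession="Music teacher",
    location="Erbil, Iraq",
    accessibility="Complete blindness",
    biography="Music teacher who went blind at 13 and plans to use podcasts and videos for his online course.",
    quote="Music is for everyone. Technology is for everyone.",
    frustrations=["Websites that do not allow downloading the audio version of articles."],
    tech_familiarity="Limited tech user",
)
```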

17.3 validating personas

We validated our persona collection through expert review. For this purpose, we recruited three UX experts with more than ten years of prior experience. The validation criteria determine whether a persona is adequately described and contains the relevant characteristics. If a persona fulfilled the criteria, we considered it eligible for inclusion in Personify.

Figure 17.5: Personify graphically visualizes all the personas

17.4 categorizing personas into impairments

Personify categorizes personas with respect to visual, auditory, speech, cognitive, and motor impairments. Personify documents impairments as a card with the name, description, and a representative image (Figure 17.6).

Figure 17.6: Personify documents impairments as a card with the name, description, and a representative image.

Each persona represents a fictitious person that has an impairment. These personas are grouped based on the category of their impairment. Personify visualizes this categorization using a graph (Figure 17.7).

Figure 17.7: A graph connecting color blindness with the relevant personas.

17.5 connecting personas & guidelines

Personify connects various personas with the UI design guidelines that are applicable based on the impairment represented by the persona. For each persona, we outlined the personal characteristics and a list of applicable UI design guidelines. Figure 17.8 illustrates the knowledge architecture of Personify and shows the relationship between personas and the corresponding UI design guidelines. This approach supports locating UI design guidelines based on related personas and impairments.

Figure 17.8: Personify visualizes each persona and the relevant UI design guidelines in a graphical manner.
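
To make the persona-to-guideline relationship of Figure 17.8 concrete, the sketch below shows one plausible way to index guidelines by impairment and look up the guidelines applicable to a given persona. The dictionaries, the specific impairment-to-guideline assignments, and the helper function are illustrative assumptions, not Personify's actual data model; the guideline names, however, correspond to real WCAG 2.1 success criteria.

```python
# Hypothetical, simplified index: impairment -> applicable guideline names.
# The mapping below is illustrative only, not Personify's knowledge base.
GUIDELINES_BY_IMPAIRMENT = {
    "complete blindness": ["WCAG 1.1.1 Non-text Content", "WCAG 2.4.3 Focus Order"],
    "color blindness": ["WCAG 1.4.1 Use of Color", "WCAG 1.4.3 Contrast (Minimum)"],
    "partial deafness": ["WCAG 1.2.2 Captions (Prerecorded)"],
}

# Hypothetical persona store: persona name -> impairment it represents.
PERSONAS = {
    "Ammar": "complete blindness",
}

def guidelines_for_persona(persona_name: str) -> list[str]:
    """Resolve a persona to the guidelines applicable to its impairment."""
    impairment = PERSONAS.get(persona_name)
    return GUIDELINES_BY_IMPAIRMENT.get(impairment, [])

print(guidelines_for_persona("Ammar"))
# ['WCAG 1.1.1 Non-text Content', 'WCAG 2.4.3 Focus Order']
```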

17.6 summary

Personify is a web-based persona and UI design guidelines library. The main idea behind Personify is to provide designers quick access to UI design knowledge in a unified repository. Therefore, we created various personas and collected UI design guidelines from various sources. Each persona and guideline in Personify is further visualized in a graphical manner. For rapid access, we grouped personas into five different categories based on visual, auditory, speech, cognitive, and motor impairments. To enhance persona utilization, we additionally provide downloadable versions of each persona.

18 USABILITY EVALUATION OF PERSONIFY

After implementing Personify, we conducted its usability evaluation using the System Usability Scale (SUS) (Brooke, 1996). This study aimed to quantitatively evaluate the usability of the features, interactions, navigation, content, and UI design of Personify (Shanmuga Sundaram, 2020).
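
For context, an SUS score such as the one reported in Section 18.2 is computed from the ten questionnaire items with Brooke's standard scoring rule: odd (positively worded) items contribute their rating minus one, even (negatively worded) items contribute five minus their rating, and the summed contributions are multiplied by 2.5 to yield a 0-100 score. The sketch below illustrates this rule with a hypothetical response vector; it does not reproduce any participant's data.

```python
def sus_score(responses: list[int]) -> float:
    """Compute a System Usability Scale score from ten 1-5 ratings.

    Odd-numbered items are positively worded (contribute rating - 1);
    even-numbered items are negatively worded (contribute 5 - rating).
    The summed contributions are scaled to a 0-100 range.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten ratings between 1 and 5")
    total = 0
    for i, rating in enumerate(responses, start=1):
        total += (rating - 1) if i % 2 == 1 else (5 - rating)
    return total * 2.5

# Hypothetical response vector from one participant (not study data)
print(sus_score([4, 2, 4, 1, 4, 2, 5, 2, 4, 2]))  # 80.0
```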

18.1 study design

We used purposive and snowball sampling to recruit 16 participants (9 UX designers, 7 Product Designers). Our recruitment criteria for participation in this study required participants to have at least one year of prior UI prototyping experience for accessibility. All the 16 participants (F=10, M=6) were 25.63 ± 3.51 (22 - 31) years old and had 2.14 ± 1.38 (1 - 5) years of prior prototyping experience (Figure 18.1). These participants had not previously participated in the formative study. Participants were compensated for their participation.

(a) Occupation (b) Prior Experience

Figure 18.1: Demographics and prior experience of participants of the usability study of Personify.


The study took place in a quiet room. Each participant was provided with a laptop, a stylus, and a mouse to create their prototypes using Personify. The study was conducted by one primary facilitator and one secondary facilitator (note-taker). As part of the usability evaluation, participants were asked to create the accessible UI design and behavior of a shopping application for Android as a digital prototype using Personify. We began the study by introducing the purpose of the study and requested the participants to provide informed consent and demographic information. We then introduced the participants to the task. Participants created their prototypes in a lab setup and provided feedback using a think-aloud protocol. Once the participants had finished their task, they were asked to fill out the SUS questionnaire. Each evaluation took ~45 minutes.

18.2 results

Personify scored an average of 76.4 points out of 100, which denotes an above average level of usability. Overall, Personify was perceived as an easy to use and beneficial resource to assist UX designers during their UI prototyping process for accessibility. Figure 18.2 shows the mean responses to each part of the SUS questionnaire.

Figure 18.2: SUS mean responses for Personify

We further analyzed the responses to each part of the questionnaire.

Frequency of Use

Since the participants were UX and product designers, they indicated a willingness to use Personify frequently during accessible UI prototyping. Two participants (12.5%) strongly agreed, and nine participants (56.3%) agreed that they would like to use this system frequently (Figure 18.3). However, five participants (31.3%) were neutral, and none disagreed with the statement.

Figure 18.3: Participants’ preference for frequency of using Personify according to SUS.

Complexity

Fourteen participants (87.5%) did not consider the design, interactions, navigation, and content of Personify unnecessarily complicated. Three participants (18.8%) strongly disagreed, and eleven participants (68.8%) disagreed that they found the system unnecessarily complex (Figure 18.4). However, one participant (6.3%) was neutral and one participant (6.3%) agreed with the statement.

Figure 18.4: Participants’ perception of complexity of Personify according to SUS.

Ease of Use

A majority of the participants (93.75%) considered Personify easy to use. Three participants (18.8%) strongly agreed and twelve participants (75%) agreed that the system was easy to use (Figure 18.5). However, one participant (6.3%) was neutral regarding the statement.

Figure 18.5: Participants’ perception of ease of using Personify according to SUS.

Need of Technical Support

Most participants (81.25%) were confident that they would not need any technical assistance while using Personify. Six participants (37.5%) strongly disagreed, and seven participants (43.8%) disagreed that they would need the support of a technical person to be able to use this system. However, two participants (12.5%) were neutral and one participant (6.3%) agreed with the statement (Figure 18.6).

Figure 18.6: Participants’ perception of the need of any technical support while using Personify according to SUS.

Integrity

There were no negative remarks regarding the integrity of the library. Five participants (31.3%) strongly agreed, and eight participants (50%) agreed that the various features in Personify were well integrated. However, three participants (18.8%) were neutral regarding this statement (Figure 18.7).

Figure 18.7: Participants’ perception of how well integrated Personify is according to SUS.

Inconsistency

A majority of participants (87.5%) disagreed that there was too much inconsistency in the system. Five participants (31.3%) strongly disagreed, and nine participants (56.3%) disagreed with the statement. However, two participants (12.5%) were neutral, and none agreed with the statement (Figure 18.8).

Figure 18.8: Participants’ perception of design inconsistencies in Personify according to SUS.

Ease of Learning

A majority of the participants (81.25%) agreed that most people would learn to use this system very quickly. Eight participants (50%) strongly agreed, and five participants (31.3%) agreed with the statement (Figure 18.9). However, one participant (6.3%) strongly disagreed, one participant (6.3%) disagreed, and the remaining one participant (6.3%) was neutral.

Figure 18.9: Participants’ perception of ease of learning Personify according to SUS.

Difficulty of Use

A majority of the responses (81.25%) reflect that participants did not find Personify very cumbersome to use. Five participants (31.3%) strongly disagreed, and eight participants (50%) disagreed with the statement (Figure 18.10). However, three participants (18.8%) were neutral.

Figure 18.10: Participants’ perception of difficulty of using Personify according to SUS.

Confidence in Use

More than half of the participants (56.2%) felt confident in using Personify for accessible UI prototyping. Four participants (25%) strongly agreed and five participants (31.3%) agreed with the statement (Figure 18.11). However, two participants (12.5%) disagreed and five participants (31.3%) were neutral regarding their confidence in using Personify.

Figure 18.11: Participants’ perception of their confidence in using Personify according to SUS.

Need of Prior Knowledge & Experience

Since our participants had prior experience in accessible UI prototyping, most of them (87.5%) indicated that they did not need to learn a lot of things before they could get going with the system. Seven participants (43.8%) strongly disagreed, and seven participants (43.8%) disagreed with the statement (Figure 18.12). However, one participant (6.3%) remained neutral and one participant (6.3%) agreed with the statement.

Figure 18.12: Participants’ perception of the need of prior knowledge and expertise in using Personify according to SUS.

18.3 summary

Personify scored 76.4 on the SUS, which indicates overall above-average usability. Participants found Personify easy to use, and they showed an inclination towards using Personify frequently to create accessible UI prototypes. Since all the participants had prior knowledge and experience regarding accessible UI prototyping, most of them thought they did not need to learn a lot before using Personify. They found the various features of Personify well integrated and did not find the library unnecessarily complex. They felt confident in using Personify and did not require any technical support while using it.

19 WORKLOAD EVALUATION OF PERSONIFY

We furthered our research by investigating accessible UI prototyping from the perspective of subjective workload (Shanmuga Sundaram, 2020). Here, workload refers to the perceived level of physical and cognitive burden experienced by the UX designers during accessible UI prototyping (Gore, 2010). Similar to our approach with traditional UI prototyping, we followed the subjective numerical measurement technique using the NASA Task Load Index (NASA-TLX) (Gore, 2010; S. G. Hart et al., 1988) to evaluate the subjective workload experienced during accessible UI prototyping.

19.1 rationale for study

This study aims to quantitatively measure and compare the subjective workload experienced by UX designers following the traditional approach versus the persona-driven approach for accessible UI prototyping. Here, the term traditional approach denotes the usual accessible UI prototyping tools and techniques used by UX designers in practice, whereas the persona-driven approach denotes utilizing the UI design guidelines organized with respect to personas provided by Personify during accessible UI prototyping. To summarize, this study shall (i) help understand the subjective workload experienced by UX designers during accessible UI prototyping and (ii) evaluate the impact of using the persona-driven approach (Personify) on the workload of accessible UI prototyping.

19.2 null hypothesis

We formulated our null hypothesis structured in terms of the subjective workload experienced by UX designers during accessible UI prototyping.

H0 There is no difference between the subjective workload of accessible UI prototyping using the traditional approach and persona-driven approach (Personify).


The study designed to test our hypothesis is explained in the following sections.

19.3 participants

In total, 32 participants (16 male, 16 female) took part in the workload study. Participants were 26 ± 3.21 (20 - 34) years old, had 2.5 ± 1.12 (1 - 5) years of prior accessible UI prototyping experience, and were a mix of 22 UX designers (68.75%) and 10 product designers (31.25%) (Table 19.1, Figure 19.1). We ensured that none of these participants had previously been part of the usability study. As a prerequisite, we also ensured that none of the participants had used Personify for accessible UI prototyping before. All participants were compensated for their participation.

(a) Occupation (b) Prior Experience

Figure 19.1: Demographics and prior experience of participants of the workload study on accessible UI prototyping.

19.4 study design

The study consisted of two groups of participants (Experimental, Control). Participants were randomly assigned to the experimental or control group with the constraint that each group had an equal distribution of participants based on their gender and prior prototyping experience. Therefore, we carefully selected 32 participants to ensure an equal distribution of gender and prior experience in both groups (Figure 19.2).

(a) Experimental (b) Control

Figure 19.2: Distribution of participants based on their prior experience for the workload study on accessible UI prototyping.

The experimental group contained ne=16 participants (F=8, M=8) with 2.88 ± 1.2 (1 - 5) years of prior accessible UI prototyping experience. Similarly, the control group contained nc=16 participants (F=8, M=8) with 2.54 ± 1.03 (1 - 5) years of prior accessible UI prototyping experience. We compiled a list of eight distinct task categories: E-commerce, Booking, Food, Music, News, Photos, Social Media, and Weather (Appendix b). Each task category consists of three features. Participants belonging to both groups were assigned a random task category and an impairment of the target user base. In total, participants had three hours to create accessible UI prototypes based on the assigned task category and the impairment. They built their UI prototypes in a lab setup. During these three hours, if the participants had any questions regarding the study, they could ask the moderator to clarify. The experimental group had to use the persona-driven approach to create their accessible UI prototypes. In contrast, the control group had the independence to follow the traditional approach of accessible UI prototyping; the control group was restricted from using the persona-driven approach during accessible UI prototyping. In both cases, the UI design decisions and prototyping tools were left solely up to the participants.

19.5 apparatus

The study was performed in a quiet room. Participants were provided with a table and a comfortable chair. Additionally, they were provided with a laptop, stylus, and mouse to create their prototypes. Prior to the study, we had already installed all the commonly used UI prototyping tools (Table 3.1, 3.2, 3.3) on the laptop. Participants were informed regarding the installed prototyping tools. They were also given the freedom to install any new tools if need be. No participants installed a new tool; however, a few participants used different web applications during their task.

19.6 measurements

Between the experimental and control groups, we analyzed the variations in physical demand, mental demand, temporal demand, performance, effort, frustration, and the overall subjective workload experienced during accessible UI prototyping using the NASA-TLX questionnaire (Pandian and Sarah. Suleri, 2020).
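
For reference, the NASA-TLX scores reported in this chapter and in Appendix c follow the standard weighted procedure: each of the six dimensions receives a 0-100 rating and a 0-5 weight from the pairwise comparisons, the adjusted rating is the product of the two, and the overall workload is the sum of the adjusted ratings divided by 15 (the total number of pairwise comparisons). The sketch below illustrates this computation; the example values reproduce participant S3C01's Lo-Fi row from Table c.1 and are not data from this chapter's study.

```python
def tlx_workload(ratings: dict, weights: dict) -> float:
    """Weighted NASA-TLX score: sum of rating*weight over the 15 pairwise comparisons."""
    assert sum(weights.values()) == 15, "pairwise-comparison weights must total 15"
    adjusted = sum(ratings[d] * weights[d] for d in ratings)
    return adjusted / 15

# Dimension ratings (0-100) and weights (0-5) for one participant;
# the values reproduce S3C01's Lo-Fi row in Table c.1.
ratings = {"physical": 65, "mental": 80, "temporal": 70,
           "performance": 15, "effort": 75, "frustration": 30}
weights = {"physical": 1, "mental": 3, "temporal": 3,
           "performance": 3, "effort": 5, "frustration": 0}

print(round(tlx_workload(ratings, weights), 2))  # 62.33
```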

19.7 task categories

Participants from both experimental and control groups were randomly assigned one of the eight task categories to prototype an Android smartphone application: shopping, booking, food, music, news, photos, social media, and weather (Appendix b).

19.8 procedure

After a brief introduction to the workload study, participants were asked to provide informed consent and demographic information. Before we introduced the participants to the assigned task categories and the impairment of the target user base, they were asked to complete a reference task, such as a mental calculation, and then assess its workload using the NASA-TLX questionnaire. Reference tasks help decrease the between-groups variability by better calibrating participants to the various dimensions of NASA-TLX (Gore, 2010; S. G. Hart et al., 1988). We used a NASA-TLX web application (Pandian and Sarah. Suleri, 2020) to capture the participants’ workload. Before starting each task, we gave a verbal description of the assigned task category, its features, and the impairment of the target user base.

Control (traditional):
Participant | Age | Gender | Experience (years) | Profession | Application Type | Impairment
CP1 | 24 | Male | 3 | UX designer | Social Media | Complete blindness
CP2 | 27 | Male | 2 | UX designer | News | Selective Mutism
CP3 | 24 | Female | 2 | UX designer | Booking | Partial deafness
CP4 | 26 | Female | 3 | UX designer | Weather | Essential tremor
CP5 | 27 | Female | 1 | UX designer | News | Dyscalculia
CP6 | 32 | Female | 4 | Product designer | Food | Night blindness
CP7 | 28 | Male | 1.5 | UX designer | Social Media | Aphasia
CP8 | 23 | Female | 3 | Product designer | Photos | Low tone hearing loss
CP9 | 25 | Female | 4 | UX designer | Music | Lou Gehrig disease
CP10 | 28 | Male | 1.5 | UX designer | Music | ADHD
CP11 | 32 | Male | 1 | UX designer | Weather | Partial blindness
CP12 | 31 | Male | 3.5 | UX designer | Photos | Stammering
CP13 | 29 | Male | 2.5 | Product designer | Food | High tone hearing loss
CP14 | 29 | Female | 3 | Product designer | E-commerce | Muscular dystrophy
CP15 | 26 | Male | 1 | UX designer | Booking | Dysgraphia
CP16 | 26 | Female | 3 | Product designer | E-commerce | Deuteranomaly

Experimental (personas):
Participant | Age | Gender | Experience (years) | Profession | Application Type | Impairment
E1P1 | 25 | Female | 2 | UX designer | Weather | Complete mutism
E1P2 | 26 | Female | 3 | UX designer | Food | Complete deafness
E1P3 | 30 | Female | 4 | Product designer | News | Loss of limbs
E1P4 | 24 | Male | 1 | UX designer | Music | Anxiety
E1P5 | 28 | Female | 3.5 | UX designer | Social Media | Dyspraxia
E1P6 | 23 | Male | 1.5 | UX designer | Photos | Down syndrome
E1P7 | 31 | Male | 3 | Product designer | Booking | Autism
E1P8 | 23 | Male | 1.5 | UX designer | E-commerce | Dyslexia
E1P9 | 28 | Male | 2.5 | Product designer | Music | Autism
E1P10 | 31 | Female | 1 | UX designer | Booking | Memory loss
E1P11 | 29 | Female | 3 | UX designer | Photos | Tritanomaly
E1P12 | 30 | Male | 4 | UX designer | E-commerce | Cerebral palsy
E1P13 | 29 | Male | 3 | Product designer | Weather | Arthritis
E1P14 | 21 | Male | 1 | UX designer | Food | Spinal cord injury
E1P15 | 32 | Female | 4 | Product designer | Social Media | Spina Bifida
E1P16 | 23 | Female | 1.5 | UX designer | News | Language limitation

Table 19.1: Participant details and assigned application types for accessible UI prototyping for workload analysis.

In addition, the participants belonging to the experimental group were also given a thorough introduction to the persona-driven approach to accessible UI prototyping and to Personify. Participants were given time to get acquainted with Personify and try it out in advance. We answered any questions asked by the participants. Once the participants were comfortable with using the library, they proceeded with creating their accessible UI prototypes according to the assigned task category and the impairment of the target user base. The participants belonging to the control group were restricted from using the persona-driven approach to create accessible UI prototypes according to the assigned task category. They could use any existing UI prototyping tools and techniques to create their accessible UI prototypes.

Once the participants were clear about the instructions, they were given three hours to create an accessible UI prototype according to the task category, the impairment, and the UI prototyping approach assigned to them. While the participants performed their task, the moderator made notes based on the qualitative comments and observations. As soon as the participants were finished with the task, they were invited for follow-up interviews in a semi-structured format (~15 min) to share their qualitative feedback and complete the NASA-TLX questionnaire regarding their overall experience during the task. One primary interviewer and one secondary interviewer (note-taker) conducted these interviews. The interviews were audio-recorded and later transcribed. During these interviews, we also collected the UI prototypes created by the participants.

19.9 analysis

We collected data from two independent groups of participants (Experimental, Control) using the NASA-TLX questionnaire. NASA-TLX uses an ordinal scale to capture subjective values from participants. The variance of the data collected using the persona-driven approach and the traditional approach is non-homogeneous. Therefore, to test for significant differences in the individual dimensions, we used the two-tailed Mann-Whitney U test (Mann et al., 1947) for independent samples (level of significance < 0.05).

We also collected qualitative feedback from participants during follow-up interviews. To analyze this data, we followed the inductive analysis approach of the Grounded Theory methodology (Strauss et al., 1994). Using the open-coding approach, we developed a coding scheme based on our initial observations. Two coders independently coded four transcripts (two transcripts per group) to refine the coding scheme. For further discussion, we used the affinity mapping technique to arrange the coding themes. Next, we iteratively checked another two transcripts individually. After a few iterations, both coders reached a substantial agreement level (Cohen’s kappa, κ=0.74). In the following sub-sections, we provide an analysis of the collected data, structured in terms of subjective workload, physical demand, mental demand, temporal demand, performance, effort, and frustration.
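
As an illustration of this quantitative analysis, the sketch below runs a two-sided Mann-Whitney U test on two independent samples with SciPy and computes Cohen's kappa for two coders with scikit-learn. The input arrays are placeholder values, not the NASA-TLX ratings or transcript codes collected in this study.

```python
from scipy.stats import mannwhitneyu
from sklearn.metrics import cohen_kappa_score

# Placeholder workload scores for two independent groups (not study data).
traditional = [62, 58, 45, 71, 55, 60, 48, 66]
personify = [30, 41, 25, 35, 44, 28, 33, 38]

# Two-sided Mann-Whitney U test for two independent samples.
u_stat, p_value = mannwhitneyu(traditional, personify, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")

# Inter-coder agreement on placeholder transcript codes.
coder_a = ["pain", "tool", "pain", "time", "tool", "pain"]
coder_b = ["pain", "tool", "time", "time", "tool", "pain"]
print(f"Cohen's kappa = {cohen_kappa_score(coder_a, coder_b):.2f}")
```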

19.10 results & discussion

Overall, the average subjective workload experienced by participants using the persona-driven approach (Personify) (M=29.7, SD=16.8) was significantly less than the average subjective workload experienced using the traditional approach (M=47.7, SD=25.11) (Figure 19.3, 19.4 & Table 19.3). This difference was statistically significant (U=71.00, nc=ne=16, p=0.016); hence, H0 is rejected.


Figure 19.3: Comparison of average workload experienced by participants of the workload study during accessible UI prototyping using the traditional versus the persona-driven approach (Personify).


Figure 19.4: Comparison of physical demand, mental demand, temporal demand, performance, effort, and frustration experienced by participants of the workload analysis during accessible UI prototyping using the traditional approach versus the persona-driven approach (Personify).

The qualitative feedback from the participants of the experimental group revealed that two factors played a significant role in decreasing the overall workload of accessible UI prototyping.

1. All-in-one library: Participants from the experimental group reported that it was convenient for them to have all the information regarding personas, accessibility, and UI design guidelines in one place. They also pointed out that Personify would be useful for them in improving their design knowledge and skills (Experimental group, n=13, 81.25%).

“I have always found W3CAG library to be very theoretical, all textual information. But, I like that here (Personify), I could see people, their problems, and how I can design for them. It was surely helpful.” - E2P7

2. Visibility of Users: Participants from the experimental group reported that having access to the personas representing the target user base and the details of their impairments helped them empathize with the users better (Experimental group, n=15, 93.75%).

“I could see them (the users). I liked that. I have used personas before as a part of user research but using it with guidelines was brilliant.” - E1P2

Measure | Traditional | Personify | Significance
Workload | 47.7 | 29.7 | U=71.0, p=0.016
Physical demand | 70.0 | 59.37 | U=120.0, p=0.386
Mental demand | 163.75 | 85.0 | U=84.00, p=0.049
Temporal demand | 73.12 | 54.37 | U=99.00, p=0.139
Performance | 78.125 | 73.75 | U=106.50, p=0.212
Effort | 180.0 | 112.5 | U=79.00, p=0.033
Frustration | 150.62 | 61.87 | U=91.00, p=0.078

Table 19.2: Comparison of subjective workload, physical demand, mental demand, temporal demand, performance, effort, and frustration of using the persona-driven (Personify) and traditional approach to accessible UI prototyping.

Table 19.3: Data collected using NASA-TLX for accessible UI prototyping.

The average mental demand (U=84.00, nc=ne=16, p=0.049) and effort (U=79.00, nc=ne=16, p=0.033) using the persona-driven approach (Personify) were significantly less than with the traditional approach (Figure 19.4). However, there is no significant difference between the two approaches in physical demand (U=120.00, nc=ne=16, p=0.386), temporal demand (U=99.00, nc=ne=16, p=0.139), performance (U=106.50, nc=ne=16, p=0.212), and frustration (U=91.00, nc=ne=16, p=0.078). The qualitative feedback revealed that participants using the traditional approach had access to the UI design guidelines but lacked the context and visibility of their target user base (n=13, 81.25%). They also reported that searching for textual content before starting to design their accessible UI prototypes (n=11, 68.75%) increased the physical demand and effort of accessible UI prototyping using the traditional approach. In contrast, Personify increased the findability and discoverability of the UI design guidelines, which helped decrease the physical demand and effort participants experienced during accessible UI prototyping using the persona-driven approach (Personify) (n=14, 87.5%).

“The thing that takes a lot of work is to read through so much text, and I don’t even know if it (UI design guideline) is even applicable to the user I am designing for.” - C1P2

“I like that it was quick and easy to find stuff (guidelines).” - E1P6

Participants using the persona-driven approach reported that they did not need to learn a lot before adopting this approach, as it aligned with their traditional workflow of looking at personas before starting to design something (n=9, 56.25%). Overall, novice UI/UX designers expressed that they felt more confident in the accessibility of their designs following the persona-driven approach (n=6, 37.5%). However, there is no correlation (rs=-0.03) between the subjective workload experienced and the prior accessible UI prototyping experience of our participants. A common concern reported by participants using the persona-driven approach was the findability of personas in persona libraries (n=5, 31.25%). It was suggested to tag personas with multiple relevant keywords to increase their discoverability, findability, and usability.

“I was looking for a persona, but I didn’t know the name for it. So I searched for the impairment instead.” - E2P6

19.11 summary

We investigated the impact of using a persona-driven approach (Personify) on the workload of accessible UI prototyping of smartphone applications. The study revealed that the subjective workload experienced by UX designers using a persona-driven approach (Personify) is significantly less than the workload experienced using the traditional approach of accessible UI prototyping. Specifically, there is a significant decrease in mental demand and effort of accessible UI prototyping while using persona-driven prototyping.

20 LIMITATIONS & FUTURE WORK

Our research aims to broaden the awareness and usage of personas and UI design guidelines for accessibility. More specifically, Personify aims to develop a deeper understanding of the purpose of each UI design guideline and how various UI design guidelines can work together with personas to assist UI designers and developers during the design and development process.

Besides the promising results of the usability and workload analyses described in the previous chapters, our research on adapting the persona-driven approach to accessible UI prototyping using Personify has some limitations that we aim to address in our future work.

Currently, Personify contains over six hundred UI design guidelines for accessibility and more than fifty personas regarding visual, auditory, speech, motor, and cognitive impairments. In the future, we aim to add more personas to our library and further expand it by integrating more UI design guidelines.

A common concern reported by participants using the persona-driven approach was the findability of personas in Personify. In the future, we aim to tag personas with multiple relevant keywords and, additionally, introduce text-based search to increase the discoverability, findability, and usability of personas.

With Personify, we organized multiple UI design guidelines for accessibility with respect to various personas. Currently, designers can download these personas and customize them in various prototyping tools. In the future, we aim to expand Personify to contain information regarding the interconnection between various personas and UI design guidelines.

Currently, Personify provides personas for UI design guidelines. In the future, we aim to provide code examples for each UI design guideline to assist the front-end development process. This expansion aims to support and encourage designers and developers to utilize UI design guidelines for accessibility in their everyday design tasks.


21 CONCLUSION

Prototyping of any software involves evolving a UI concept through various stages of design, such as low, medium, and high fidelity prototypes. This research analyzed this process from three aspects: traditional UI prototyping, rapid prototyping, and prototyping for accessibility. Based on our analysis, we proposed three approaches to address UI/UX designers’ pain points during their prototyping workflow. These approaches include automation of fidelity transformation (Eve, SUS = 89.5), UI design pattern-driven prototyping (Kiwi, SUS = 77.6), and a persona-driven approach for accessible UI designs (Personify, SUS = 76.4). Furthermore, we studied the impact of using these three novel approaches on the UI/UX designers’ subjective workload (NASA-TLX) during the software prototyping process.

Our workload analysis revealed that, unlike the traditional prototyping approach, Eve’s comprehensive support caused a significant decrease in the subjective workload experienced by UI/UX designers. Also, there was a significant decline in mental demand, temporal demand, and effort experienced by UI/UX designers using Eve. Compared to the traditional approach, the overall perceived performance increased five times using the comprehensive approach (Eve).

Similarly, the subjective workload experienced by UI/UX designers using the pattern-driven approach with Kiwi was significantly less than the workload experienced using the traditional approach of rapid prototyping. Specifically, there was a significant decrease in physical demand and effort of rapid prototyping while using the pattern-driven approach. However, there was no significant difference in subjective workload experienced while using UI design pattern libraries with and without the pattern standard.

Lastly, the subjective workload experienced by UI/UX designers using the persona-driven approach offered by Personify was significantly less than the workload experienced using the traditional approach of prototyping for accessibility. Specifically, there was a significant decrease in mental demand and effort of prototyping accessible UIs while using Personify.

This work aimed to extend prior work on UI prototyping. It is broadly applicable to understand the impact of using deep learning, UI design patterns, and personas on the workload of UI prototyping.

a UI PROTOTYPING TOOLS REVIEW

In the last 25 years, numerous commercial tools were also introduced in addition to academic prototyping artifacts. Some of these tools are very popular among UI/UX designers. Table a.1 compares, contrasts, and highlights the most interesting features of 140 commercial prototyping tools. These features include the support for creating designs, defining behavior, and conducting evaluations of lo-fi, me-fi, and hi-fi prototypes.

Table a.1: Overview of commercial prototyping tools.

b WORKLOAD ANALYSIS: TASK CATEGORIES

For the workload analysis, participants from both the experimental and control groups were randomly assigned one of the following task categories to prototype an Android smartphone application:

1. Shopping: Participants were asked to prototype a shopping application with the following features:
• Show product catalog
• Show product details
• Buying a product

2. Booking: Participants were asked to prototype a booking application for movie tickets with the following features:
• Show previously booked tickets
• Search for new program
• Book a ticket

3. Food: Participants were asked to prototype a recipe book application with the following features:
• Show a collection of dishes
• Show the recipe of a particular dish
• Create a new recipe

4. Music: Participants were asked to prototype a music application with the following features:
• Play a song
• Search for a specific song
• Show similar music


5. News: Participants were asked to prototype a news application with the following features:
• Show latest news
• Arrange the news items based on their significance
• Show the details of a specific news item

6. Photos: Participants were asked to prototype a photo gallery application with the following features:
• Show all photos
• Show different categories of photos
• Edit a photo

7. Social Media: Participants were asked to prototype a social media application with the following features:
• Show timeline with latest activities
• Show the current user’s profile
• Open a new message

8. Weather: Participants were asked to prototype a weather application with the following features:
• Show current weather
• Show weather details throughout the day
• Show the weather changes for the whole week

c TRADITIONAL PROTOTYPING WORKLOAD ANALYSIS

The following sections contain the data collected during the workload analysis of traditional prototyping of an Android smartphone application.

c.1 lo-fi workload analysis

Physical Demand Mental Demand Temporal Demand Performance Effort Frustration Level Workload

ID Scale Weight Adjusted Scale Weight Adjusted Scale Weight Adjusted Scale Weight Adjusted Scale Weight Adjusted Scale Weight Adjusted Rating Rating Rating Rating Rating Rating (100) (5) (500) (100) (5) (500) (100) (5) (500) (100) (5) (500) (100) (5) (500) (100) (5) (500) (100)

Control (Conventional)
S3C01 65 1 65 80 3 240 70 3 210 15 3 45 75 5 375 30 0 0 62.33
S3C02 50 0 0 85 5 425 50 2 100 0 3 0 75 4 300 75 1 75 60
S3C03 30 0 0 70 3 210 70 5 350 0 1 0 75 4 300 70 2 140 66.67
S3C04 20 0 0 95 5 475 50 3 150 15 4 60 65 2 130 5 1 5 54.67
S3C05 20 3 60 75 2 150 35 5 175 50 0 0 50 4 200 75 1 75 44
S3C06 30 4 120 30 1 30 70 2 140 20 2 40 30 4 120 25 2 50 33.33
S3C07 45 2 90 20 3 60 60 4 240 10 4 40 35 2 70 5 0 0 33.33
S3C08 20 0 0 55 5 275 60 2 120 0 4 0 50 3 150 15 1 15 37.33
S3C09 45 3 135 65 4 260 40 0 0 45 5 225 60 2 120 15 1 15 50.33
S3C10 75 3 225 60 4 240 50 0 0 35 3 105 65 4 260 55 1 55 59
S3C11 60 0 0 95 4 380 80 2 160 40 4 160 80 4 320 55 1 55 71.67
S3C12 50 0 0 85 4 340 75 3 225 5 2 10 95 5 475 60 1 60 74
S3C13 5 0 0 20 3 60 10 5 50 25 4 100 25 2 50 10 1 10 18
S3C14 25 0 0 70 5 350 35 2 70 30 3 90 70 4 280 60 1 60 56.67
S3C15 70 4 280 60 2 120 30 1 30 65 3 195 55 5 275 50 0 0 60
S3C16 65 2 130 80 5 400 50 0 0 30 4 120 50 4 200 60 0 0 56.67
Mean 42.19 1.38 69.06 65.31 3.62 250.94 52.19 2.44 126.25 24.06 3.06 74.38 59.69 3.62 226.56 41.56 0.88 38.44 52.37
SD 21.29 1.59 88.75 23.98 1.26 138.75 18.97 1.71 100.27 19.51 1.29 72.2 19.19 1.09 116.79 25.8 0.62 39.61 15.39

Experimental (Eve)
S3E01 0 1 0 10 5 50 0 2 0 30 3 90 0 4 0 60 0 0 9.33
S3E02 20 1 20 20 3 60 0 0 0 0 4 0 0 5 0 0 2 0 5.33
S3E03 0 1 0 0 2 0 0 2 0 0 4 0 0 3 0 0 3 0 0
S3E04 10 0 0 10 2 20 10 1 10 0 5 0 20 4 80 10 3 30 9.33
S3E05 40 1 40 20 0 0 10 4 40 60 4 240 10 4 40 20 2 40 26.67
S3E06 0 3 0 0 5 0 0 3 0 10 2 20 0 2 0 0 0 0 1.33
S3E07 0 0 0 20 4 80 10 1 10 20 4 80 20 4 80 0 2 0 16.67
S3E08 0 0 0 10 3 30 20 1 20 0 4 0 10 2 20 0 5 0 4.67
S3E09 20 3 60 10 1 10 10 3 30 70 5 350 30 3 90 10 0 0 36
S3E10 60 1 60 40 1 40 10 4 40 10 1 10 60 3 180 0 5 0 22
S3E11 20 1 20 30 5 150 10 2 20 10 4 40 10 3 30 0 0 0 17.33
S3E12 10 0 0 10 5 50 0 3 0 0 1 0 10 3 30 0 3 0 5.33
S3E13 10 4 40 100 3 300 30 2 60 30 3 90 90 3 270 10 0 0 50.67
S3E14 10 0 0 10 4 40 10 2 20 30 3 90 10 5 50 10 1 10 14
S3E15 10 2 20 30 3 90 30 2 60 20 5 100 20 3 60 10 0 0 22
S3E16 20 3 60 20 2 40 10 1 10 20 5 100 20 4 80 0 0 0 19.33
Mean 14.38 1.31 20.0 21.25 3.0 60.0 10.0 2.06 20.0 19.38 3.56 75.62 19.38 3.44 63.12 8.12 1.62 5.0 16.25
SD 16.32 1.3 24.22 23.63 1.59 74.92 9.66 1.12 20.66 21.12 1.31 97.09 24.07 0.89 72.45 15.15 1.78 12.11 13.47

Table c.1: Data collected using NASA-TLX during Lo-Fi prototyping.


c.2 me-fi workload analysis

Columns as in Table c.1: participant ID; scale rating, weight, and adjusted rating for each of the six subscales (Physical Demand, Mental Demand, Temporal Demand, Performance, Effort, Frustration Level); overall workload.

Control (Conventional)
S3C01 50 3 150 10 2 20 25 4 100 25 3 75 75 2 150 40 1 40 35.67
S3C02 70 4 280 70 4 280 50 0 0 50 1 50 70 4 280 70 2 140 68.67
S3C03 85 2 170 60 0 0 100 5 500 10 3 30 65 4 260 25 1 25 65.67
S3C04 75 4 300 50 1 50 50 0 0 0 2 0 70 5 350 70 3 210 60.67
S3C05 50 3 150 20 2 40 50 2 100 30 5 150 55 3 165 20 0 0 40.33
S3C06 60 1 60 85 4 340 70 5 350 35 2 70 80 1 80 75 2 150 70
S3C07 60 0 0 70 2 140 75 2 150 10 4 40 75 2 150 70 5 350 55.33
S3C08 50 1 50 80 2 160 100 3 300 30 5 150 70 4 280 25 0 0 62.67
S3C09 75 1 75 75 4 300 75 1 75 25 4 100 70 4 280 50 1 50 58.67
S3C10 40 1 40 65 3 195 65 0 0 40 3 120 40 4 160 45 4 180 46.33
S3C11 60 2 120 60 4 240 40 1 40 0 5 0 55 2 110 35 1 35 36.33
S3C12 65 2 130 70 3 210 30 4 120 5 1 5 80 5 400 50 0 0 57.67
S3C13 75 2 150 60 4 240 65 2 130 30 0 0 60 4 240 50 3 150 60.67
S3C14 20 0 0 20 2 40 85 3 255 90 3 270 95 4 380 95 3 285 82
S3C15 80 2 160 75 4 300 55 0 0 25 3 75 75 5 375 65 1 65 65
S3C16 10 0 0 90 5 450 65 2 130 5 4 20 75 3 225 10 1 10 55.67
Mean 57.81 1.75 114.69 60.0 2.88 187.81 62.5 2.12 140.62 25.62 3.0 72.19 69.38 3.5 242.81 49.69 1.75 105.62 57.58
SD 20.89 1.29 91.13 23.8 1.36 131.85 21.91 1.75 143.54 22.72 1.51 73.26 12.63 1.21 100.51 23.34 1.48 108.5 12.66

Experimental (Eve)
S3E01 0 0 0 70 2 140 100 1 100 80 4 320 80 4 320 90 4 360 82.67
S3E02 30 1 30 40 5 200 20 0 0 30 2 60 50 4 200 50 3 150 42.67
S3E03 60 3 180 20 1 20 40 0 0 30 5 150 30 4 120 40 2 80 36.67
S3E04 70 0 0 30 1 30 70 2 140 80 5 400 90 4 360 80 3 240 78
S3E05 70 4 280 20 0 0 20 1 20 40 5 200 30 2 60 60 3 180 49.33
S3E06 10 1 10 10 3 30 0 1 0 10 3 30 10 5 50 10 2 20 9.33
S3E07 20 1 20 20 5 100 0 0 0 30 2 60 40 3 120 50 4 200 33.33
S3E08 0 0 0 30 4 120 0 1 0 20 2 40 20 4 80 20 4 80 21.33
S3E09 10 0 0 10 2 20 10 1 10 20 5 100 10 3 30 20 4 80 16
S3E10 20 5 100 10 2 20 20 2 40 70 4 280 10 2 20 10 0 0 30.67
S3E11 90 5 450 50 1 50 10 1 10 10 2 20 80 3 240 60 3 180 63.33
S3E12 10 0 0 30 5 150 10 3 30 20 2 40 50 3 150 10 2 20 26
S3E13 20 1 20 10 2 20 20 1 20 0 2 0 20 4 80 10 5 50 12.67
S3E14 10 2 20 80 0 0 20 3 60 90 1 90 80 4 320 90 5 450 62.67
S3E15 60 2 120 60 3 180 20 0 0 60 1 60 30 5 150 30 4 120 42
S3E16 90 3 270 70 4 280 10 0 0 80 1 80 80 2 160 70 5 350 76
Mean 35.62 1.75 93.75 35.0 2.5 85.0 23.12 1.06 26.88 41.88 2.88 120.62 44.38 3.5 153.75 43.75 3.31 160.0 42.67
SD 32.04 1.77 134.7 23.94 1.71 84.7 26.76 1.0 40.94 29.94 1.54 118.4 28.98 0.97 107.51 29.18 1.35 133.37 23.88

Table c.2: Data collected using NASA-TLX during Me-Fi prototyping.

c.3 hi-fi workload analysis

Columns as in Table c.1: participant ID; scale rating, weight, and adjusted rating for each of the six subscales (Physical Demand, Mental Demand, Temporal Demand, Performance, Effort, Frustration Level); overall workload.

Control (Conventional)
S3C01 90 3 270 80 2 160 70 0 0 20 4 80 80 5 400 10 1 10 61.33
S3C02 80 2 160 70 0 0 50 3 150 20 2 40 90 4 360 60 4 240 63.33
S3C03 10 1 10 70 3 210 60 1 60 20 5 100 60 4 240 40 1 40 44
S3C04 30 1 30 90 4 360 80 2 160 20 1 20 70 4 280 60 3 180 68.67
S3C05 80 4 320 70 2 140 50 0 0 20 2 40 80 5 400 60 2 120 68
S3C06 20 1 20 80 3 240 0 3 0 0 4 0 40 4 160 30 0 0 28
S3C07 0 0 0 70 3 210 100 5 500 50 1 50 90 4 360 50 2 100 81.33
S3C08 70 0 0 70 3 210 70 4 280 40 4 160 80 3 240 90 1 90 65.33
S3C09 100 2 200 100 3 300 100 1 100 0 1 0 100 4 400 100 4 400 93.33
S3C10 70 3 210 80 4 320 20 1 20 10 1 10 80 5 400 60 1 60 68
S3C11 30 0 0 80 3 240 100 5 500 30 1 30 100 4 400 80 2 160 88.67
S3C12 40 2 80 70 4 280 50 0 0 40 5 200 50 2 100 40 2 80 49.33
S3C13 10 1 10 100 5 500 60 4 240 40 2 80 60 3 180 10 0 0 67.33
S3C14 20 1 20 100 3 300 90 4 360 70 3 210 60 4 240 40 0 0 75.33
S3C15 40 0 0 90 4 360 80 3 240 20 4 80 80 3 240 40 1 40 64
S3C16 10 0 0 100 2 200 100 5 500 0 2 0 100 4 400 100 2 200 86.67
Mean 43.75 1.31 83.12 82.5 3.0 251.88 67.5 2.56 194.38 25.0 2.62 68.75 76.25 3.88 300.0 54.38 1.62 107.5 67.04
SD 32.84 1.25 110.38 12.38 1.15 111.85 29.33 1.86 188.11 19.32 1.5 68.5 18.21 0.81 101.98 27.8 1.26 108.47 16.83

Experimental (Eve)
S3E01 10 0 0 20 2 40 40 1 40 50 4 200 60 5 300 40 3 120 46.67
S3E02 20 0 0 40 4 160 30 1 30 30 3 90 60 5 300 50 2 100 45.33
S3E03 60 1 60 70 2 140 60 0 0 80 4 320 70 4 280 70 4 280 72
S3E04 70 0 0 30 1 30 80 2 160 80 4 320 90 5 450 90 3 270 82
S3E05 60 2 120 70 1 70 30 0 0 70 4 280 20 3 60 70 5 350 58.67
S3E06 0 1 0 20 4 80 0 0 0 10 3 30 20 3 60 20 4 80 16.67
S3E07 30 1 30 10 4 40 0 0 0 50 2 100 30 3 90 60 5 300 37.33
S3E08 0 0 0 30 4 120 10 1 10 20 4 80 40 4 160 20 2 40 27.33
S3E09 20 0 0 20 4 80 10 1 10 10 5 50 20 2 40 10 3 30 14
S3E10 20 3 60 10 2 20 20 2 40 70 5 350 10 3 30 10 0 0 33.33
S3E11 40 4 160 20 4 80 50 1 50 20 3 60 60 2 120 30 1 30 33.33
S3E12 0 0 0 20 4 80 10 1 10 10 5 50 20 3 60 0 2 0 13.33
S3E13 20 1 20 30 2 60 10 0 0 20 5 100 20 3 60 30 4 120 24
S3E14 20 2 40 90 0 0 20 3 60 80 1 80 20 4 80 80 5 400 44
S3E15 40 3 120 20 4 80 0 0 0 20 4 80 10 2 20 30 2 60 24
S3E16 70 3 210 70 5 350 80 0 0 70 1 70 70 2 140 70 4 280 70
Mean 30.0 1.31 51.25 35.62 2.94 89.38 28.12 0.81 25.62 43.12 3.56 141.25 38.75 3.31 140.62 42.5 3.06 153.75 40.12
SD 24.22 1.35 66.62 25.02 1.48 81.36 26.89 0.91 41.31 28.22 1.31 112.0 25.53 1.08 125.4 27.93 1.48 135.69 21.33

Table c.3: Data collected using NASA-TLX during Hi-Fi prototyping.
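For readers who want to re-analyse these appendix tables, the sketch below shows one way to compare the overall workload of the two groups, here for the Hi-Fi data of Table c.3, using a Mann–Whitney U test (Mann and Whitney, 1947). It is an illustration under the assumption that the per-participant Workload column is the quantity of interest; it is not the dissertation's own analysis script, whose procedure is reported in the main text.

# Illustrative re-analysis sketch (not the dissertation's analysis script):
# compare the overall Workload column of the control (conventional) and
# experimental (Eve) groups from Table c.3 with a Mann-Whitney U test.
from scipy.stats import mannwhitneyu

# Workload values copied from Table c.3 (Hi-Fi prototyping).
control = [61.33, 63.33, 44, 68.67, 68, 28, 81.33, 65.33, 93.33, 68,
           88.67, 49.33, 67.33, 75.33, 64, 86.67]
eve = [46.67, 45.33, 72, 82, 58.67, 16.67, 37.33, 27.33, 14, 33.33,
       33.33, 13.33, 24, 44, 24, 70]

u, p = mannwhitneyu(control, eve, alternative="two-sided")
print(f"U = {u}, p = {p:.4f}")  # a small p-value indicates a significant difference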

BIBLIOGRAPHY

Adobe Comp. https://www.adobe.com/products/comp.html. (Accessed on 27-08-2020).

Adobe Draw. https://www.adobe.com/products/draw.html. (Accessed on 17-03-2020).

Adobe Illustrator. https://www.adobe.com/de/products/illustrator.html. (Accessed on 17-03-2020).

Adobe InDesign. https://www.adobe.com/ch_de/products/indesign.html. (Accessed on 29-08-2020).

Adobe Muse CC. http://muse.adobe.com/. (Accessed on 18-03-2020).

Adobe Photoshop. https://www.adobe.com/de/products/photoshop.html. (Accessed on 17-03-2020).

Adobe Sketch. https://www.adobe.com/products/sketch.html. (Accessed on 17-03-2020).

Adobe XD. https://www.adobe.com/products/xd.html. (Accessed on 17-03-2020).

Airbnb. https://airbnb.design/the-way-we-build/. (Accessed on 11-10-2020).

Alexander, Christopher (1977). A pattern language: towns, buildings, construction. Oxford university press.

Ambler, Scott W (2004). The object primer: Agile model-driven development with UML 2.0. Cambridge University Press.

Android Studio. https://developer.android.com/studio. (Accessed on 18-03-2020).

Antetype. https://www.antetype.com/. (Accessed on 18-03-2020).

Appery.io. https://appery.io/. (Accessed on 18-03-2020).

Apple (2018). Apple Human Interface Guidelines. url: https://developer.apple.com/design/human-interface-guidelines/ (visited on 07/15/2018).

ArtFlow. http://artflowstudio.com/. (Accessed on 17-03-2020).


Atlassian. https://atlassian.design/. (Accessed on 11-10-2020).

Atlassian (2002). Jira. https://www.atlassian.com/software/jira. (Accessed on 12-03- 2020).

Atomic.io. https://atomic.io/. (Accessed on 18-03-2020).

Austin, Robert Daniel and Lee Devin (2003). Artful making: What managers need to know about how artists work. FT Press.

Autodesk SketchBook. https://sketchbook.com/. (Accessed on 17-03-2020).

Avocode App. https://avocode.com/. (Accessed on 29-08-2020).

Axure RP. https://www.axure.com/. (Accessed on 17-03-2020).

Bailey, Brian P. and Joseph A. Konstan (2003). “Are Informal Tools Better? Comparing DEMAIS, Pencil and Paper, and Authorware for Early Multimedia Design.” In: CHI ’03, pp. 313–320. doi: 10.1145/642611.642666. url: https://doi.org/10.1145/642611. 642666.

Ballard, Barbara (2007). “Designing the Mobile User Experience.” In: Hoboken, NJ, USA: John Wiley Sons, Inc. isbn: 0470033614.

Balsamiq. https://balsamiq.com/. (Accessed on 13-07-2018).

Bamboo Paper. https://www.wacom.com/en-us/products/apps-services/bamboo-paper. (Accessed on 17-03-2020).

BBC (2015). http://www.bbc.co.uk/gel/guidelines/category/design-patterns. (Accessed on 17-03-2020).

Beltramelli, Tony (2017). “Teaching Machines to Understand User Interfaces.” In: url: https://uizard.io/research/.

Beltramelli, Tony (2018). “Pix2code: Generating Code from a Graphical User Interface Screenshot.” In: Proceedings of the ACM SIGCHI Symposium on Engineering Interactive Computing Systems. Vol. abs/1705.07962. EICS ’18. New York, NY, USA: Association for Computing Machinery. isbn: 9781450358972. doi: 10.1145/3220134.3220135. arXiv: 1705.07962. url: https://doi.org/10.1145/3220134.3220135.

Benjamin, Wilkins (2017). Sketching Interfaces. https://airbnb.design/sketching-interfaces/. (Accessed on 29-08-2020).

Bernhaupt, Regina, Marco Winkler, and Florence Pontico (2009). “User interface patterns: A field study evaluation.” In: IADIS international conference-interfaces and human computer interaction 2009.

Beryl, Plimmer and Apperley Mark (2003). “Software to sketch interface designs.” In: Human-Computer Interaction - INTERACT’03. IOS Press, pp. 73–80. url: https://pdfs.semanticscholar.org/8211/6465daa1d31dd286657097339d3505459f5c.pdf.

Blender.org (1995). https://www.blender.org/. (Accessed on 17-03-2020).

Boords. https://boords.com/. (Accessed on 17-03-2020).

Borchers, Jan (2002). “Teaching HCI design patterns: Experience from two university courses.” In: Patterns in practice: A workshop for UI designers (at CHI 2002 international conference on human factors of computing systems).

Borchers, Jan O (2000). “A pattern approach to interaction design.” In: Proceedings of the 3rd conference on Designing interactive systems: processes, practices, methods, and techniques. ACM, pp. 369–378.

Braby, CD, D Harris, and HC Muir (1993). “A psychophysiological approach to the assessment of work underload.” In: Ergonomics 36.9, pp. 1035–1042.

Brooke, John (June 1996). “SUS: A quick and dirty usability scale.” In: Usability evaluation in industry 189.194. ISBN: 9780748404605, pp. 4–7. url: https://www.crcpress.com/ product/isbn/9780748404605.

Brown, I D (1962). “Measuring the ’spare mental capacity’ of car drivers by a subsidiary auditory task.” In: Ergonomics 5.1, pp. 247–250.

Budescu, David V, Rami Zwick, and Amnon Rapoport (1986). “A comparison of the eigen- value method and the geometric mean procedure for ratio scaling.” In: Applied psychologi- cal measurement 10.1, pp. 69–78.

Buffer. http://bufferapp.github.io/buffer-style/. (Accessed on 11-10-2020).

Cacoo. https://cacoo.com/home. (Accessed on 18-03-2020).

Caetano, Anabela, Neri Goulart, Manuel Fonseca, and Joaquim Jorge (2002). “JavaSketchIt: Issues in Sketching the Look of User Interfaces.” In: url: http://www.aaai.org/Papers/ Symposia/Spring/2002/SS-02-08/SS02-08-002.pdf.

Caldwell, Ben, Michael Cooper, Loretta Guarino Reid, and Gregg Vanderheiden (Dec. 2008). “Web content accessibility guidelines (WCAG) 2.0.” In: WWW Consortium (W3C). url: https://www.w3.org/TR/WCAG20/.

Caldwell, Ben, Loretta Guarino Reid, Gregg Vanderheiden, Wendy Chisholm, John Slatin, and Jason White (June 2018). “Web content accessibility guidelines (WCAG) 2.1.” In: WWW Consortium (W3C). url: https://www.w3.org/TR/WCAG21/.

Camburn, Bradley, Vimal Viswanathan, Julie Linsey, David Anderson, Daniel Jensen, Richard Crawford, Kevin Otto, and Kristin Wood (2017). “Design prototyping methods: state of the art in strategies, techniques, and guidelines.” In: Design Science 3.

CanvasFlip App. https://www.behance.net/canvasflip. (Accessed on 18-03-2020).

Casner, Stephen M (2009). “Perceived vs. measured effects of advanced cockpit systems on pilot workload and error: Are pilots’ beliefs misaligned with reality?” In: Applied Ergonomics 40.3, pp. 448–456.

Chi Tran, Linh (2019). “UI Design Guidelines Library Extension for Kiwi.” MA thesis. RWTH Aachen University.

Chisholm, Wendy, Gregg Vanderheiden, and Ian Jacobs (2001). “Web content accessibility guidelines 1.0.” In: Interactions 8.4, pp. 35–54.

Chisnell, Dana and Janice Redish (2005). Designing web sites for older adults: Expert review of usability for older adults at 50 web sites. Vol. 1. AARP San Francisco.

Chung, Ronald, Petrut Mirica, and Beryl Plimmer (2005). “InkKit: A Generic Design Tool for the Tablet PC.” In: CHINZ ’05, pp. 29–30. doi: 10 . 1145 / 1073943 . 1073950. url: https://doi.org/10.1145/1073943.1073950.

Clarity. https://clarity.design/. (Accessed on 11-10-2020).

Codiqa. http://documentation.bold-themes.com/codiqa/. (Accessed on 18-03-2020).

Coggle. https://coggle.it/. (Accessed on 29-08-2020).

Color Hunt. https://colorhunt.co/. (Accessed on 18-04-2020).

Concept.ly. https://appadvice.com/app/concept-ly-prototype-tool-for-interactive- mockup-wireframe/670223313. (Accessed on 18-03-2020).

Conceptdraw.com. https://www.conceptdraw.com/. (Accessed on 18-03-2020).

Coolors.co. https://coolors.co/. (Accessed on 18-04-2020).

Cooper, Alan (1999). The Inmates Are Running the Asylum. Indianapolis, IN, USA: Macmillan Publishing Co., Inc. isbn: 0672316498.

Cooper, Alan (2003). About face 2.0: the essentials of interaction design. Indianapolis, IN: Wiley. isbn: 9780764526411.

Cooper, Alan, Robert Reimann, and David Cronin (2007). About face 3: the essentials of interaction design. John Wiley & Sons.

Coplien, James O, Douglas C Schmidt, and John M Vlissides (1995). Pattern languages of program design. Vol. 58. Addison-Wesley Reading, MA.

Coram, Todd and Jim Lee (1996). Experiences–A pattern language for user interface design. http://www.maplefish.com/todd/papers/Experiences.html. (Accessed on 12-10-2020).

Coyette, Adrien, Stéphane Faulkner, Manuel Kolp, Quentin Limbourg, and Jean Vander- donckt (2004a). “SketchiXML: Towards a Multi-Agent Design Tool for Sketching User Interfaces Based on USIXML.” In: pp. 75–82.

Coyette, Adrien, Stéphane Faulkner, Manuel Kolp, Quentin Limbourg, and Jean Vanderdon- ckt (2004b). “SketchiXML: towards a multi-agent design tool for sketching user interfaces based on USIXML.” In: Proceedings of the 3rd annual conference on Task models and di- agrams. TAMODIA ’04. New York, NY, USA: ACM, pp. 75–82. isbn: 1-59593-000-0. doi: 10.1145/1045446.1045461. url: http://doi.acm.org/10.1145/1045446.1045461.

Coyette, Adrien and Jean Vanderdonckt (2005). “A sketching tool for designing anyuser, anyplatform, anywhere user interfaces.” In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 3585 LNCS, pp. 550–564. issn: 03029743. doi: 10.1007/11555261_45.

Cronholm, Stefan (2009). “The usability of usability guidelines.” In: Proceedings of the 21st Annual Conference of the Australian Computer-Human Interaction Special Interest Group on Design: Open 24/7 - OZCHI ’09. New York, New York, USA: ACM Press, p. 233. isbn: 9781605588544. doi: 10.1145/1738826.1738864. url: http://portal.acm.org/citation.cfm?doid=1738826.1738864.

Crumlish, Christian and Erin Malone (2009). Designing social interfaces: Principles, patterns, and practices for improving the user experience. O’Reilly Media, Inc.

Deaton, Mary (Sept. 2003). “The Elements of User Experience: User-Centered Design for the Web.” In: Interactions 10.5, pp. 49–51. issn: 1072-5520. doi: 10.1145/889692.889709. url: https://doi.org/10.1145/889692.889709.

DesignerVista. http://www.designervista.com/. (Accessed on 18-03-2020).

Do Ink. http://www.doink.com/. (Accessed on 17-03-2020).

Dow, Steven, Blair MacIntyre, Jaemin Lee, Christopher Oezbek, Jay David Bolter, and Maribeth Gandy (Oct. 2005). “Wizard of Oz Support Throughout an Iterative Design Process.” In: IEEE Pervasive Computing 4.4, pp. 18–26. issn: 1536-1268. doi: 10.1109/MPRV. 2005.93. url: http://dx.doi.org/10.1109/MPRV.2005.93.

Dow, Steven P., Kate Heddleston, and Scott R. Klemmer (2009). “The Efficacy of Prototyping under Time Constraints.” In: Proceedings of the Seventh ACM Conference on Creativity and Cognition. C&C ’09. Berkeley, California, USA: Association for Computing Machinery, pp. 165–174. isbn: 9781605588650. doi: 10.1145/1640233.1640260. url: https://doi. org/10.1145/1640233.1640260.

Draw Island. https://drawisland.com/. (Accessed on 17-03-2020).

Duyne, Douglas K Van, James Landay, and Jason I Hong (2002). The Design of Sites: Patterns, Principles, and Processes for Crafting a Customer-Centered Web Experience. Boston, MA, USA: Addison-Wesley Longman Publishing Co., Inc. isbn: 020172149X.

Engelberg, Daniel and Ahmed Seffah (2002). “A Framework for Rapid Mid-Fidelity Prototyp- ing of Web Sites.” In: Usability: Gaining a Competitive Edge. Ed. by Judy Hammond, Tom Gross, and Janet Wesson. Boston, MA: Springer US, pp. 203–215. isbn: 978-0-387-35610-5. doi: 10.1007/978-0-387-35610-5_14. url: https://doi.org/10.1007/978-0-387- 35610-5_14.

Erickson, Thomas (2000). “Lingua Francas for Design: Sacred Places and Pattern Languages.” In: Proceedings of the 3rd Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques. DIS ’00. New York City, New York, USA: Association for Computing Machinery, pp. 357–368. isbn: 1581132190. doi: 10.1145/347642.347794. url: https://doi.org/10.1145/347642.347794.

Evernote. https://evernote.com/. (Accessed on 17-03-2020).

Fairclough, Stephen H, MC Ashby, and Andrew M Parkes (1993). “In-vehicle displays, visual workload and usability evaluation.” In: Vision in vehicles 4, pp. 245–254.

Fast Mockup. https://fast-mockup.com/. (Accessed on 18-03-2020).

Figma. https://www.figma.com. (Accessed on 17-03-2020).

Finlay, Janet, Elizabeth Allgar, Andy Dearden, and Barbara McManus (2002). “Using pattern languages in participatory design.” In: People and Computers XVI - Memorable Yet Invisible. London: Springer, pp. 159–174. isbn: 978-1-4471-0105-5.

Fisk, Arthur D, William L Derrick, and Walter Schneider (1983). “The assessment of work- load: Dual task methodology.” In: Proceedings of the Human Factors Society Annual Meeting. Vol. 27. 3. SAGE Publications Sage CA: Los Angeles, CA, pp. 229–233.

FitBit. https://dev.fitbit.com/build/guides/user-interface/svg-components/. (Accessed on 11-10-2020).

flairbuilder. http://flairbuilder.com/. (Accessed on 18-03-2020).

Flat Icon. https://www.flaticon.com/. (Accessed on 18-04-2020).

Flinto. https://www.flinto.com/. (Accessed on 17-03-2020).

FlowMapp. https://www.flowmapp.com/. (Accessed on 29-08-2020).

Floyd, Ingbert R, Michael B Twidale, and M Cameron Jones (2008). “Resolving incom- mensurable debates: a preliminary identification of persona kinds, attributes, and characteristics.” In: Artifact: Journal of Design Practice 2.1, pp. 12–26. doi: 10 . 1080 / 17493460802276836. url: https://doi.org/10.1080/17493460802276836.

FluidUI. https://www.fluidui.com. (Accessed on 18-03-2020).

Fonseca, Manuel J, Cèsar Pimentel, and Joaquim A Jorge (2002). “CALI: An Online Scribble Recognizer for Calligraphic Interfaces.” In: Proceedings of the 2002 AAAI Spring Symposium on Sketch Understanding, pp. 51–58. url: http://www.inesc-id.pt/ficheiros/publicacoes/747.pdf.

ForeUI. http://www.foreui.com/. (Accessed on 18-03-2020).

forms.app. https://forms.app/en. (Accessed on 18-03-2020).

Frame Box. http://framebox.org/. (Accessed on 18-03-2020).

Framer. https://www.framer.com/. (Accessed on 12-03-2020).

FreeImages.com. https://www.freeimages.com/. (Accessed on 18-04-2020).

Froont. https://froont.com/. (Accessed on 18-03-2020).

Furedy, John J (1987). “Beyond heart rate in the cardiac psychophysiological assessment of mental effort: The T-wave amplitude component of the electrocardiogram.” In: Human Factors 29.2, pp. 183–194.

Gaffar, A, D Sinnig, H Javahery, and A Seffah (2003). “MOUDIL: A comprehensive framework for disseminating and sharing HCI patterns.” In: Position Paper in ACM CHI 2003 Workshop: Perspectives on HCI Patterns: Concepts and Tools, Ft. Lauderdale, Florida.

Gamma, Erich, John Vlissides, Richard Helm, and Ralph Johnson (1995). Design patterns: elements of reusable object-oriented software. Pearson Education India.

UI-Garage (2016). http://uigarage.net/. (Accessed on 13-10-2018).

Gawron, Valerie Jane (2019). Human Performance, Workload, and Situational Awareness Measures Handbook, 2-Volume Set. CRC Press.

GIMP. https://www.gimp.org/. (Accessed on 17-03-2020).

GitHub (2008). https://github.com/. (Accessed on 12-03-2020).

Gliffy. https://www.gliffy.com/. (Accessed on 29-08-2020).

Gong, Jun and Peter Tarasewich (2004). “Guidelines for handheld mobile device interface design.” In: Proceedings of DSI 2004 Annual Meeting, pp. 3751–3756. doi: 10.1.1.87.5230.

Goodwin, Kim (2009). Designing for the Digital Age: How to Create Human-Centered Products and Services. Wiley Publishing.

Google (2020). Material Design. url: https://material.io/design/.

Google Drawings. https://en.wikipedia.org/wiki/Google_Drawings. (Accessed on 29-08-2020).

Gore, Brian (2010). “Measuring and Evaluating Workload: A Primer.” In: NASA Technical Memorandum July, p. 35. url: http://www.sti.nasa.gov.

Gravit Designer. https://www.designer.io/en/. (Accessed on 18-03-2020).

Gray, Wayne D, Bonnie E John, and Michael E Atwood (1993). “Project Ernestine: Validating a GOMS analysis for predicting and explaining real-world task performance.” In: Human- computer interaction 8.3, pp. 237–309.

Gremillion, Ben and Jerry Cao (2016). Web UI Design Patterns. https://www.uxpin.com/ studio/ebooks/web-ui-design-patterns-2016-volume-1/. (Accessed on 12-10-2020).

Grier, Rebecca A (2015). “How High Is High ? A Meta-Analysis of Nasa-Tlx Global Workload Scores.” In: 59th Annual Meeting of the Human Factors and Ergonomics Society, pp. 1727–1731. issn: 1541-9312. doi: 10.1177/1541931215591373.

Grudin, Jonathan and John Pruitt (2002). “Personas, participatory design and product development: An infrastructure for engagement.” In: Proceedings of Participation and Design Conference (PDC2002), Sweden. Vol. 2.

GUI Design Studio. https://www.carettasoftware.com/guidesignstudio/. (Accessed on 18-03-2020).

Ha, Seyong, Jiwan Park, and Joonhwan Lee (2014). “Increasing Interactivity of Paper Pro- totyping with Smart Pen.” In: Proceedings of HCI Korea. HCIK ’15, pp. 76–82. url: http: //dl.acm.org/citation.cfm?id=2729485.2729498.

Halpert, Benjamin J (2005). “Authentication interface evaluation and design for mobile devices.” In: Proceedings of the 2nd annual conference on Information security curriculum development - InfoSecCD ’05. New York, New York, USA: ACM Press, p. 112. isbn: 1595932615. doi: 10.1145/1107622.1107649. url: http://portal.acm.org/citation.cfm?doid= 1107622.1107649.

Hart, Sandra G. and Lowell E. Staveland (1988). “Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research.” In: Human mental workload. Ed. by Peter A Hancock and Najmedin Meshkati. Vol. 52. Advances in Psychology. North-Holland, pp. 139–183. isbn: 0933957300. doi: http://dx.doi.org/10.1016/S0166-4115(08)62386-9. url: http://www.sciencedirect.com/science/article/pii/S0166411508623869.

Hart, Sandra G (2006). “Nasa-Task Load Index (NASA-TLX); 20 Years Later.” In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting 50.9, pp. 904–908. issn: 1541- 9312. doi: 10.1177/154193120605000909. arXiv: 9605103 [cs]. url: http://journals. sagepub.com/doi/pdf/10.1177/154193120605000909.

Hendy, Keith C, Kevin M Hamilton, and Lois N Landry (1993). “Measuring subjective work- load: when is one scale better than many?” In: Human Factors 35.4, pp. 579–601.

Hering, H and G Coatleven (1996). “ERGO (Version 2) For instantaneous self assessment of workload in a real-time ATC simulation environment.” In: EEC Note 10, p. 96.

Hill, Susan G, Helene P Iavecchia, James C Byers, Alvah C Bittner Jr, Allen L Zaklade, and Richard E Christ (1992). “Comparison of four subjective workload rating scales.” In: Human factors 34.4, pp. 429–439.

Horton, Sarah and Whitney Quesenbery (2014). A Web for Everyone: Designing Accessible User Experiences. 1st. Brooklyn, New York: Rosenfeld Media.

HotGloo. https://www.hotgloo.com/. (Accessed on 18-03-2020).

IBM. https://www.ibm.com/design/language/. (Accessed on 11-10-2020).

IBM. IBM Design Accessibility Handbook. http://accessibility-handbook.mybluemix.net/ design/a11y-handbook/. (Accessed on 28-10-2020).

iconify on Iconfinder. https://www.iconfinder.com/iconify. (Accessed on 18-04-2020).

Icons8. Lunacy. https://icons8.com/lunacy. (Accessed on 18-03-2020).

Inc, Kony (2018). Kony Visualizer. https://www.kony.com/products/visualizer/. (Accessed on 18-03-2020).

Infragistics. Indigo.Design. https://www.infragistics.com/products/indigo-design. (Accessed on 18-03-2020).

Inkscape. https://inkscape.org/. (Accessed on 18-03-2020).

InVision Studio. https://www.invisionapp.com. (Accessed on 18-03-2020).

InVision Studio. https://www.invisionapp.com/studio. (Accessed on 18-03-2020).

Ionic Creator. https://creator.ionic.io/. (Accessed on 18-03-2020).

iOS. https://developer.apple.com/design/human-interface-guidelines/ios/overview/themes/. (Accessed on 11-10-2020).

iPhone Mockup. http://iphonemockup.lkmc.ch/. (Accessed on 18-03-2020).

iPlotz. https://iplotz.com/. (Accessed on 18-03-2020).

irise. https://www.irise.com/. (Accessed on 18-03-2020).

Irons, Mark L. (2003). Patterns for personal websites. http://www.rdrop.com/~half/Creations/Writings/Web.patterns/index.html. (Accessed on 13-10-2018).

ISO-9241-210 (2010). “9241-210: 2010. Ergonomics of human system interaction-Part 210: Human-centred design for interactive systems (formerly known as 13407).” In: Interna- tional Standardization Organization (ISO). Switzerland.

Jacobs, Ian, Jon Gunderson, and Eric Hansen (Dec. 2002). User Agent Accessibility Guidelines 1.0. (Accessed on 10-24-2018). url: http://www.w3.org/TR/2002/REC-UAAG10-20021217/.

Javahery, Homa and Ahmed Seffah (2002). “A model for usability pattern-oriented design.” In: Proceedings of the First International Workshop on Task Models and Diagrams for User Interface Design. TAMODIA ’02. INFOREC Publishing House Bucharest, pp. 104–110. isbn: 9738360013.

Javahery, Homa, Daniel Sinnig, Ahmed Seffah, Peter Forbrig, and T. Radhakrishnan (2006). “Pattern-Based UI Design: Adding Rigor with User and Context Variables.” In: Proceedings of the 5th International Conference on Task Models and Diagrams for Users Interface Design. TAMODIA’06. Hasselt, Belgium: Springer-Verlag, pp. 97–108. isbn: 9783540708155.

jBart. http://www.artwaresoft.com/jbart.html#?page=jbart. (Accessed on 18-03-2020).

Johnson, Jeff A (2015). “Designing with the Mind in Mind: The Psychological Basis for UI Design Guidelines.” In: Extended Abstracts of the ACM CHI’15 Conference on Human Factors in Computing Systems 2, pp. 2501–2502. doi: 10.1145/2702613.2706667. url: http://dx.doi.org/10.1145/2702613.2706667.

Jorna, Peter GAM (1992). “Spectral analysis of heart rate and psychological state: A review of its validity as a workload index.” In: Biological psychology 34.2-3, pp. 237–257.

Justinmind. https://www.justinmind.com/. (Accessed on 18-03-2020).

Kahn, P. H., B. T. Gill, A. L. Reichert, T. Kanda, H. Ishiguro, and J. H. Ruckert (2010). “Validating interaction patterns in HRI.” In: 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 183–184. doi: 10.1109/HRI.2010.5453205.

Karat, Clare-Marie, Christine Halverson, Daniel Horn, and John Karat (1999). “Patterns of Entry and Correction in Large Vocabulary Continuous Speech Recognition Systems.” In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI ’99. Pittsburgh, Pennsylvania, USA: Association for Computing Machinery, pp. 568–575. isbn: 0201485591. doi: 10.1145/302979.303160. url: https://doi.org/10.1145/302979. 303160.

Keynote - Apple. https://www.apple.com/keynote/. (Accessed on 17-03-2020).

Kipi, Nilda (2019). “Kiwi: A UI Design Pattern Library for Mobile Applications.” MA thesis. RWTH Aachen University.

Kite Compositor. https://kiteapp.co/. (Accessed on 18-03-2020).

Klomann, Marcel and Jan-Torsten Milde (2013). “Freiform: A SmartPen Based Approach for Creating Interactive Paper Prototypes for Collecting Data.” In: Human Interface and the Management of Information. Information and Interaction Design. Ed. by Sakae Yamamoto. Berlin, Heidelberg: Springer Berlin Heidelberg, pp. 316–321. isbn: 978-3-642-39209-2.

Koncept. https://konceptapp.com/. (Accessed on 18-03-2020).

Krug, Steve (2018). Don’t make me think!: Web & Mobile Usability. MITP-Verlags GmbH & Co. KG.

Kumar, Ashwin (2018). sketch-code. https://github.com/ashnkumar/sketch-code. (Accessed on 18-03-2020).

Laakso, Sari A (2003). User interface design patterns. https://www.cs.helsinki.fi/u/ salaakso/patterns/. (Accessed on 18-03-2020).

Lab, Pattern (2018). https://patternlab.io/. (Accessed on 13-10-2018).

Lacey, Matt (2018). Usability Matters. Manning Publications Co., p. 392. isbn: 9781617293931.

Lancaster, A (Dec. 2004). “Paper Prototyping: The Fast and Easy Way to Design and Refine User Interfaces.” In: IEEE Transactions on Professional Communication 47.4, pp. 335–336. issn: 0361-1434. doi: 10.1109/tpc.2004.837973.

Landay, James A. (1996). “SILK.” In: Conference companion on Human factors in computing systems common ground - CHI ’96. New York, New York, USA: ACM Press, pp. 398–399. isbn: 0897918320. doi: 10.1145/257089.257396. url: http://dl.acm.org/citation. cfm?id=257089.257396.

Landay, James A. and Brad A. Myers (2001). “Sketching interfaces: toward more human interface design.” In: Computer 34.3, pp. 56–64. issn: 00189162. doi: 10.1109/2.910894.

LayoutIt! https://www.layoutit.com/. (Accessed on 18-03-2020).

LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton (May 2015). “Deep learning.” In: Nature 521.7553, p. 436. issn: 0028-0836. doi: 10.1038/nature14539. url: https://doi.org/10. 1038/nature14539.

Li, Shu-Hui, Jia-Jyun Hsu, Chih-Ya Chang, Pin-Hsuan Chen, and Neng-Hao Yu (2017). “Xketch: A Sketch-Based Prototyping Tool to Accelerate Mobile App Design Process.” In: DIS ’17 Companion, pp. 301–304. doi: 10.1145/3064857.3079179. url: https://doi.org/10. 1145/3064857.3079179.

Lightning. https://www.lightningdesignsystem.com/. (Accessed on 11-10-2020).

Lim, Jiho (2019). Mobbin Design. https://mobbin.design/. (Accessed on 19-10-2019).

Lin, James and James A. Landay (2008). “Employing Patterns and Layers for Early-Stage Design and Prototyping of Cross-Device User Interfaces.” In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI ’08. Florence, Italy: Association for Computing Machinery, pp. 1313–1322. isbn: 9781605580111. doi: 10.1145/1357054. 1357260. url: https://doi.org/10.1145/1357054.1357260.

Lin, James, Mark W Newman, Jason I Hong, and James A Landay (2000). “DENIM: Finding a Tighter Fit Between Tools and Practice for Web Site Design.” In: Proceedings of the SIGCHI conference on Human factors in computing systems (CHI’00) 2.1, pp. 1–6. doi: http://doi.acm.org/10.1145/332040.332486.

Lin, James, Mark W Newman, Jason I Hong, and James A Landay (2001). “DENIM: an informal tool for early stage web site design.” In: Human Factors, pp. 205–206. doi: 10. 1145/634067.634190.

Lin, Tsung-Yi, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár (Aug. 2017). “Focal Loss for Dense Object Detection.” In: arXiv:1708.02002 [cs]. arXiv: 1708.02002. url: http: //arxiv.org/abs/1708.02002.

Liu, Wei, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C Berg (Dec. 2015). “SSD: Single Shot MultiBox Detector.” In: Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 794, pp. 185–192. doi: 10.1007/978-3-319-46448-0_2.

Lonely Planet. https://rizzo.lonelyplanet.com/styleguide/design-elements/colours. (Accessed on 11-10-2020).

Long, Frank (2009). “Real or imaginary: The effectiveness of using personas in product design.” In: Proceedings of the Irish Ergonomics Society annual conference. Vol. 14. Dublin, pp. 1–10.

Luchini, Kathleen, Chris Quintana, and Elliot Soloway (2004). “Design guidelines for learner- centered handheld tools.” In: Proceedings of the 2004 conference on Human factors in com- puting systems - CHI ’04. Vol. 6. 1. New York, New York, USA: ACM Press, pp. 135–142. isbn: 1581137028. doi: 10.1145/985692.985710. url: http://portal.acm.org/citation.cfm? doid=985692.985710.

Lucidchart. https://www.lucidchart.com/pages/. (Accessed on 29-08-2020).

LukeW. https://www.lukew.com/. (Accessed on 11-10-2020).

Lumzy. http://www.prototypingtool.com/lumzy-a-quick-mockup-creation-and-prototyping-tool. (Accessed on 18-03-2020).

Lysaght, Robert J, Susan G Hill, AO Dick, Brian D Plamondon, and Paul M Linton (1989). Operator workload: Comprehensive review and evaluation of operator workload methodologies. Tech. rep. ANALYTICS INC WILLOW GROVE PA.

Ma, Jiao and Cindy LeRouge (2007). “Introducing User Profiles and Personas into Information Systems Development.” In: p. 237. url: http://aisel.aisnet.org/amcis2007/237.

Macaw. http://macaw.co/. (Accessed on 18-03-2020).

Mailchimp. https://ux.mailchimp.com/patterns/color. (Accessed on 11-10-2020).

Mann, H B and D R Whitney (1947). “On a Test of Whether one of Two Random Variables is Stochastically Larger than the Other.” In: Ann. Math. Statist. 18.1, pp. 50–60. doi: 10. 1214/aoms/1177730491. url: https://doi.org/10.1214/aoms/1177730491.

Mao, Ji-Ye, Karel Vredenburg, Paul W. Smith, and Tom Carey (Mar. 2005). “The State of User-centered Design Practice.” In: Commun. ACM 48.3, pp. 105–109. issn: 0001-0782. doi: 10.1145/1047671.1047677. url: http://doi.acm.org/10.1145/1047671.1047677.

Marvel. https://marvelapp.com/. (Accessed on 18-03-2020).

Matisse. https://netbeans.org/features/java/swing.html. (Accessed on 18-03-2020).

Metalis, SA (1991). “Heart period as a useful index of pilot workload in commercial transport aircraft.” In: The International Journal of Aviation Psychology 1.2, pp. 107–116.

Miaskiewicz, Tomasz and Kenneth A Kozar (2011). “Personas and user-centered design: How can personas benefit product design processes?” In: Design studies 32.5, pp. 417–430. doi: https://doi.org/10.1016/j.destud.2011.03.003. url: http://www.sciencedirect. com/science/article/pii/S0142694X11000275.

Microsoft (July 2018a). Design basics - UWP app developer. url: https://docs.microsoft.com/en-us/windows/uwp/design/basics/ (visited on 11/20/2018).

Microsoft (2018b). Sketch 2 Code. url: https://sketch2code.azurewebsites.net/.

Microsoft. Blend. https://docs.microsoft.com/en-us/visualstudio/xaml-tools/creating-a-ui-by-using-blend-for-visual-studio?view=vs-2019. (Accessed on 03-18-2020).

Microsoft. Microsoft Inclusive Design. https://www.microsoft.com/design/inclusive/. (Accessed on 28-10-2020).

Microsoft Design. https://www.microsoft.com/design. (Accessed on 11-10-2020).

Microsoft PowerPoint. https://products.office.com/en/powerpoint. (Accessed on 17-03-2020).

Microsoft Visio. https://www.microsoft.com/en-us/microsoft-365/visio/flowchart-software. (Accessed on 29-08-2020).

MindMeister. https://www.mindmeister.com/. (Accessed on 29-08-2020).

Miniukovich, Aliaksei, Antonella De Angeli, Simone Sulpizio, and Paola Venuti (2017). “Design Guidelines for Web Readability.” In: Proceedings of the 2017 Conference on Designing Interactive Systems - DIS ’17. New York, New York, USA: ACM Press, pp. 285–296. isbn: 9781450349222. doi: 10.1145/3064663.3064711. url: http://dl.acm.org/citation. cfm?doid=3064663.3064711.

Miro. https://miro.com/apps/. (Accessed on 29-08-2020).

Mobiscroll (2018). UIPatterns.io. http://uipatterns.io/. (Accessed on 28-10-2018).

MockFlow. https://www.mockflow.com/. (Accessed on 18-03-2020).

Mockingbird. https://gomockingbird.com/home. (Accessed on 18-03-2020).

MockingBot. https://mockingbot.com/. (Accessed on 18-03-2020).

Mockplus. https://www.mockplus.com/. (Accessed on 17-03-2020).

Mockup Builder. http://mockupbuilder.com/. (Accessed on 18-03-2020).

Mockup Designer. https://fatiherikli.github.io/mockup-designer. (Accessed on 18-03- 2020).

Mockup Screens. http://www.mockupscreens.com/. (Accessed on 18-03-2020).

Mockup.io. https://mockup.io/about/. (Accessed on 18-03-2020).

MockupTiger Wireframes. https://www.mockuptiger.com/. (Accessed on 18-03-2020).

Moqups. https://moqups.com/. (Accessed on 13-07-2018).

Moroney, William F, David W Biers, and F Thomas Eggemeier (1995). “Some measurement and methodological considerations in the application of subjective workload measure- ment techniques.” In: The international journal of aviation psychology 5.1, pp. 87–106.

MS Paint. https://canvaspaint.org/#local:a36ddbd61caf4. (Accessed on 17-03-2020).

Muckler, Frederick A and Sally A Seven (1992). “Selecting performance measures: "Objective" versus "Subjective" measurement.” In: Human factors 34.4, pp. 441–455.

Mulder, LJM (1992). “Measurement and analysis methods of heart rate and respiration for use in applied environments.” In: Biological psychology 34.2-3, pp. 205–236.

Mulder, Steve, Ziv Yaar, and David Broschinsky (2007). The User is Always Right: A Practical Guide to Creating and Using Personas for the Web. Voices that matter. New Riders. isbn: 9780321434531.

Myoats. https://www.myoats.com/. (Accessed on 17-03-2020).

Narendra, Savinay, Sheelabhadra Dey, Josiah Coad, Seth Polsley, and Tracy Hammond (2019). “FreeStyle: A Sketch-Based Wireframing Tool.” In: Inspiring Students with Digital Ink. Springer, pp. 105–117. doi: 10.1007/978-3-030-17398-2_7. url: https://doi.org/ 10.1007/978-3-030-17398-2_7.

Naview. https://www.naviewapp.com/. (Accessed on 18-03-2020).

Neil, Theresa (2014). Mobile design pattern gallery: UI patterns for smartphone apps. O’Reilly Media, Inc.

Neonto Studio. https://neonto.com/nativestudio. (Accessed on 17-03-2020).

Newman, Mark W. and James A. Landay (2000). “Sitemaps, Storyboards, and Specifications: A Sketch of Web Site Design Practice.” In: Proceedings of the 3rd Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques. DIS ’00. New York City, New York, USA: Association for Computing Machinery, pp. 263–274. isbn: 1581132190. doi: 10.1145/347642.347758. url: https://doi.org/10.1145/347642.347758.

Newman, Mark W., James Lin, Jason I. Hong, and James A. Landay (Sept. 2003). “DENIM: An Informal Web Site Design Tool Inspired by Observations of Practice.” In: Human-Compututer Interaction 18.3, pp. 259–324. issn: 0737-0024. doi: 10 . 1207 / S15327051HCI1803_3. url: https://doi.org/10.1207/S15327051HCI1803_3.

Nicely-Done (2018). http://nicelydone.club/patterns. (Accessed on 13-10-2018).

Nielsen, Jakob (1995). 10 Usability Heuristics for User Interface Design. (Accessed on 17-08-2018). url: https://www.nngroup.com/articles/ten-usability-heuristics/.

Nielsen, Jakob and Raluca Budiu (2012). Mobile Usability. 1st ed. New Riders, p. 216. isbn: 978-0-321-88448-0.

Nielsen, Lene (2004). Engaging Personas and Narrative Scenarios. English. WorkingPaper.

Nielsen Norman Group: UX Training, Consulting, & Research. https://www.nngroup.com/. (Accessed on 11-10-2020).

Nilsson, Erik G. (Dec. 2009). “Design patterns for user interface for mobile applications.” In: Advances in Engineering Software 40.12, pp. 1318–1328. issn: 09659978. doi: 10.1016/j. advengsoft.2009.01.017. url: http://dx.doi.org/10.1016/j.advengsoft.2009.01. 017.

NinjaMock. https://ninjamock.com/. (Accessed on 18-03-2020).

Norman, Donald A (1988). The psychology of everyday things. Basic books.

Norman, Donald A (1999). “Affordance, conventions, and design.” In: interactions 6.3, pp. 38– 43.

Norman, Donald A. (2002). The Design of Everyday Things. New York, NY, USA: Basic Books, Inc., p. 272.

Notability. https://www.gingerlabs.com/. (Accessed on 17-03-2020).

Notism. https://www.notism.io/. (Accessed on 18-03-2020).

OmniGraffle. https://www.omnigroup.com/omnigraffle. (Accessed on 18-03-2020).

Oracle. https://www.oracle.com/webfolder/ux/middleware/alta/index.html. (Accessed on 11-10-2020).

Origami Studio. https://origami.design/. (Accessed on 29-08-2020).

Outsystems (2017). Silk UI Patterns. https://silkui.outsystems.com/Patterns.aspx. (Accessed on 28-10-2018).

Overflow. https://overflow.io/. (Accessed on 29-08-2020).

Paint Online. http://www.onemotion.com/flash/sketch-paint/. (Accessed on 17-03-2020).

Paintbrush. https://paintbrush.sourceforge.io/. (Accessed on 17-03-2020).

PaintCode. https://www.paintcodeapp.com/. (Accessed on 17-03-2020).

Paletton. https://paletton.com/. (Accessed on 18-04-2020).

Pandian, Vinoth Pandian Sermuga (2019). “UI Element Detection from Freehand Lo-Fi Sketch Using Deep Neural Networks.” MA thesis. RWTH Aachen University.

Pandian, Vinoth Pandian Sermuga, Sarah Suleri, Christian Beecks, and Matthias Jarke (2020). “MetaMorph: AI Assistance to Transform Lo-Fi Sketches to Higher Fidelities.” In: Proceedings of the 32nd Australian Conference on HCI. ozCHI’20. Sydney, Australia: Association for Computing Machinery. isbn: 978-1-4503-8975-4/20/12. doi: 10.1145/3441000.3441030.

Pandian, Vinoth Pandian Sermuga, Sarah Suleri, and Matthias Jarke (2020). “Syn: Synthetic Dataset for Training UI Element Detector From Lo-Fi Sketches.” In: Proceedings of the 25th International Conference on Intelligent User Interfaces Companion. IUI ’20. Cagliari, Italy: Association for Computing Machinery, pp. 79–80. isbn: 9781450375139. doi: 10.1145/ 3379336.3381498. url: https://doi.org/10.1145/3379336.3381498.

Pandian, Vinoth Pandian Sermuga, Sarah Suleri, and Matthias Jarke (2021). “UISketch: A Large-Scale Dataset of UI Element Sketches.” In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. CHI ’21. Yokohama, Japan: Association for Computing Machinery.

Pandian, Vinoth Pandian Sermuga and Sarah Suleri (2020). “NASA-TLX Web App: An Online Tool to Analyse Subjective Workload.” In: arXiv preprint arXiv:2001.09963.

PaperDraw. https://paperone.en.aptoide.com/. (Accessed on 17-03-2020).

Pencil. https://pencil.evolus.vn/. (Accessed on 18-03-2020).

Pencil Madness. http://pencilmadness.com/. (Accessed on 17-03-2020).

Pérez Medina, Jorge Luis (2016). “The UsiSketch Software Architecture.” In: Romanian Journal of Human-Computer Interaction 9.4, pp. 305–333. url: http://hdl.handle.net/2078.1/187342, https://dial.uclouvain.be/pr/boreal/object/boreal:187342.

Pernice, Kara (2016). UX Prototypes: Low Fidelity vs. High Fidelity. https://www.nngroup. com/articles/ux-prototype-hi-lo-fidelity/. (Accessed on 12-04-2020).

Petrie, Jennifer N. and Kevin A. Schneider (2007). Mixed-Fidelity Prototyping of User Interfaces. Ed. by Gavin Doherty and Ann Blandford. Berlin, Heidelberg.

Pexels. https://www.pexels.com/. (Accessed on 18-04-2020).

Photon. https://design.firefox.com/photon/. (Accessed on 11-10-2020).

Pidoco. https://pidoco.com/en. (Accessed on 18-03-2020).

POP by Marvel. https://marvelapp.com/pop/. (Accessed on 17-03-2020).

Porges, Stephen W and Evan A Byrne (1992). “Research methods for measurement of heart rate and respiration.” In: Biological psychology 34.2-3, pp. 93–130.

PowerMockup. https://www.powermockup.com/. (Accessed on 18-03-2020).

Precursor. https://precursorapp.com/. (Accessed on 18-03-2020).

Principle. https://principleformac.com/. (Accessed on 18-03-2020).

Proto.io. https://proto.io/. (Accessed on 18-03-2020).

ProtoPie. https://www.protopie.io/. (Accessed on 18-03-2020).

ProtoShare. http://www.protoshare.com/. (Accessed on 18-03-2020).

Protostrap. http://protostrap.com/. (Accessed on 18-03-2020).

Prott. https://prottapp.com/features/. (Accessed on 18-03-2020).

Pruitt, John and Tamara Adlin (2005). The Persona Lifecycle: Keeping People in Mind Through- out Product Design. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc. isbn: 0125662513.

Pruitt, John and Jonathan Grudin (2003). “Personas: Practice and Theory.” In: Proceedings of the 2003 Conference on Designing for User Experiences. DUX ’03. San Francisco, California: ACM, pp. 1–15. isbn: 1-58113-728-1. doi: 10.1145/997078.997089. url: http://doi.acm. org/10.1145/997078.997089.

Pttrns-LLC (2012). Pttrns. https://pttrns.com/. (Accessed on 28-10-2018).

QuickBooks. https://designsystem.quickbooks.com/?refpage. (Accessed on 11-10-2020).

Rabin, Jo and Charles McCathieNevile (July 2008). Mobile Web Best Practices 1.0. (Accessed on 25-10-2018). url: https://www.w3.org/TR/mobile-bp/.

rapidui.io. http://rapidui.io/. (Accessed on 29-08-2020).

React Studio. https://reactstudio.com/. (Accessed on 17-03-2020).

Richard, Jocelyn, Jean-Marc Robert, Sébastien Malo, and Joël Migneault (2011). “Giving UI developers the power of UI design patterns.” In: Symposium on Human Interface. Springer, pp. 40–47. doi: 10.1007/978-3-642-21793-7_5.

Roscoe, Alan H (1984). Assessing pilot workload in flight. Tech. rep. Royal Aircraft Establishment, Bedford, United Kingdom.

Roscoe, Alan H (1992). “Assessing pilot workload. Why measure heart rate, HRV and respi- ration?” In: Biological psychology 34.2-3, pp. 259–287.

Roscoe, Alan H and Georges A Ellis (1990). A subjective rating scale for assessing pilot workload in flight: A decade of practical use. Tech. rep. Royal Aerospace Establishment, Farnborough United Kingdom.

Rubine, Dean (July 1991). “Specifying Gestures by Example.” In: SIGGRAPH Comput. Graph. 25.4, pp. 329–337. issn: 0097-8930. doi: 10.1145/127719.122753. url: http://doi.acm. org/10.1145/127719.122753.

Rudd, Jim, Ken Stern, and Scott Isensee (Jan. 1996). “Low vs. High-fidelity Prototyping Debate.” In: interactions 3.1, pp. 76–85. issn: 1072-5520. doi: 10.1145/223500.223514. url: http://doi.acm.org/10.1145/223500.223514.

Savah. https://www.savahapp.com/. (Accessed on 18-03-2020).

Scene Builder. https://gluonhq.com/products/scene-builder/. (Accessed on 18-03-2020).

Schrage, Michael (1999). Serious play: How the world’s best companies simulate to innovate. Harvard Business Press.

Seffah, A and H Javahery (2002). “On the usability of usability patterns.” In: Workshop entitled Patterns in Practice, CHI, pp. 1–2.

Seffah, Ahmed (2003). “Learning the ropes: human-centered design skills and patterns for software engineers’ education.” In: Interactions 10.5, pp. 36–45.

Seffah, Ahmed (2015a). “HCI Pattern Capture and Dissemination: Practices, Lifecycle, and Tools.” In: Patterns of HCI Design and HCI Design of Patterns. Springer, pp. 219–242.

Seffah, Ahmed (2015b). Patterns of HCI Design and HCI Design of Patterns: Bridging HCI Design and Model-Driven Software Engineering. Springer.

Segura, Vinícius C V B, Simone D J Barbosa, and Fabiana Pedreira Simões (2012). “UISKEI.” In: Proceedings of the International Working Conference on Advanced Visual Interfaces - AVI 2012. New York, New York, USA: ACM Press, p. 18. isbn: 978-1-4503-1287-5. doi: 10.1145/ 2254556.2254564. url: http://dl.acm.org/citation.cfm?doid=2254556.2254564.

Shanmuga Sundaram, Harish Balaji (2020). “Personify: UI Design Guidelines Library For Persona-Driven Prototyping.” MA thesis. RWTH Aachen University.

Sheibley, Mari (2013). Mobile Patterns. http://www.mobile-patterns.com/. (Accessed on 28-10-2018).

Shishkovets, Svetlana (2019). “Feature List and Prototype of Eve: A Sketch Based Prototyping Tool.” MA thesis. RWTH Aachen University.

Shitkova, Maria, Justus Holler, Tobias Heide, Nico Clever, and Jörg Becker (2015). “Towards Usability Guidelines for Mobile Websites and Applications.” In: Wirtschaftsinformatik, pp. 1603–1617. isbn: 978-3-00-049184-9.

Shneiderman, Ben (1998). Designing the User Interface: Strategies for Effective Human-Computer Interaction. 3rd. Boston, MA: Addison-Wesley Longman Publishing Co., Inc., p. 639. isbn: 0201694972. url: https://dl.acm.org/citation.cfm?id=523237.

Shopify Polaris. https://polaris.shopify.com/. (Accessed on 11-10-2020).

Silva, Thiago Rocha, Jean-Luc Hak, Marco Winckler, Olivier Nicolas, et al. (2019). “A com- parative study of milestones for featuring GUI prototyping tools.” In: CoRR abs/1906.01417. url: http://arxiv.org/abs/1906.01417.

Sketch. https://www.sketch.com/. (Accessed on 17-03-2020).

Sketch - Draw & Paint. https://sketch.en.aptoide.com/. (Accessed on 17-03-2020).

Sketchboard. https://sketchboard.io/. (Accessed on 17-03-2020).

SmartDraw. https://www.smartdraw.com/. (Accessed on 29-08-2020).

Smashing Magazine. https://www.smashingmagazine.com/. (Accessed on 11-10-2020).

Snapp. https://snapp.click/. (Accessed on 18-03-2020).

Soegaard, Mads (2018). “Mobile Web UX Design: Some Simple Guidelines.” In: The Basics of User Experience Design. Interaction Design Foundation. Chap. 8, pp. 58–64. url: interaction-design.org.

Solid. https://solid.buzzfeed.com/. (Accessed on 11-10-2020).

Spearman, C (1904). “The Proof and Measurement of Association between Two Things.” In: The American Journal of Psychology 15.1, pp. 72–101. issn: 00029556. url: http://www. jstor.org/stable/1412159.

Squid. https://www.squidnotes.com/. (Accessed on 17-03-2020).

Stack Overflow. https://stackoverflow.design/product/guidelines/using-stacks/. (Accessed on 11-10-2020).

Strauss, Anselm and Juliet Corbin (1994). “Grounded Theory Methodology: An Overview.” In: Handbook of Qualitative Research 17, pp. 273–285.

Strayer, David L, Frank A Drews, and Dennis J Crouch (2006). “A comparison of the cell phone driver and the drunk driver.” In: Human factors 48.2, pp. 381–391. doi: 10.1518/ 001872006777724471.

STUDIO. https://studio.design/. (Accessed on 18-03-2020).

Suleri, Sarah, Yeganeh Hajimiri, and Matthias Jarke (2020). “Impact of using UI Design Patterns on the Workload of Rapid Prototyping of Smartphone Applications: An Experimental Study.” In: Proceedings of the 22nd International Conference on Human-Computer Interaction with Mobile Devices and Services. MobileHCI ’20. New York, NY, USA: Association for Computing Machinery. isbn: 9781450380522. doi: 10.1145/3406324.3410718. url: https://doi.org/10.1145/3406324.3410718.

Suleri, Sarah, Nilda Kipi, Linh Chi Tran, and Matthias Jarke (2019). “UI Design Pattern-Driven Rapid Prototyping for Agile Development of Mobile Applications.” In: Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services. MobileHCI ’19. New York, NY, USA: Association for Computing Machinery. isbn: 9781450368254. doi: 10.1145/3338286.3344399. url: https://doi.org/10.1145/3338286.3344399.

Suleri, Sarah, Vinoth Pandian Sermuga Pandian, Svetlana Shishkovets, and Matthias Jarke (2019). “Eve: A Sketch-based Software Prototyping Workbench.” In: Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems. CHI EA ’19. New York, NY, USA: ACM, LBW1410:1–LBW1410:6. isbn: 978-1-4503-5971-9. doi: 10.1145/3290607.3312994. url: http://doi.acm.org/10.1145/3290607.3312994.

Surface Studio 2 (2018). https://www.microsoft.com/en-us/surface. (Accessed on 19-04- 2020).

Symu.co. https://symu.co/. (Accessed on 18-03-2020).

Taleb, M, H Javahery, and A Seffah (2006). “Pattern-Oriented design composition and mapping for cross-platform Web applications.” In: The XIII International Workshop on Design, specification and verification of interactive systems, Spring Verlag, Trinity College Dublin, Ireland. doi: 10.1109/IRI.2007.4296608.

Tarasewich, Peter, Jun Gong, and Fiona Fui-Hoon Nah (2007). “Interface Design for Handheld Mobile Devices.” In: AMCIS 2007 Proceedings. url: http://aisel.aisnet.org/amcis2007/352.

Tattersall, Andrew J and Penelope S Foord (1996). “An experimental evaluation of instanta- neous self-assessment as a measure of workload.” In: Ergonomics 39.5, pp. 740–748.

Tayasui Sketches. https://tayasui.com/sketches/. (Accessed on 17-03-2020).

TensorFlow. TensorFlow. (Accessed on 10-02-2019). url: https://www.tensorflow.org/.

Tidwell, Jenifer (2010). Designing interfaces: Patterns for effective interaction design. O’Reilly Media, Inc.

Toxboe, Anders (2007). UI-Patterns. http://ui- patterns.com/patterns. (Accessed on 28-10-2018).

Tumult Hype. https://tumult.com/hype/. (Accessed on 29-08-2020).

Ubuntu. https://design.ubuntu.com/. (Accessed on 11-10-2020).

UMLet. https://www.umlet.com/. (Accessed on 29-08-2020).

Unger, Russ and Carolyn Chandler (2004). A Project Guide to UX Design: For user experience designers in the field or in the Making. New Riders.

Unsplash. https://unsplash.com/. (Accessed on 18-04-2020).

UserTesting (2019). 5 Best Prototyping Tools to Help UI/UX Designers Build Better Products/UserTesting Blog. https://www.usertesting.com/blog/prototyping-tools-and-testing. (Accessed on 29-11-2020).

UX Booth. https://www.uxbooth.com/. (Accessed on 11-10-2020).

UX-App. https://www.ux-app.com/. (Accessed on 18-03-2020).

UXPin (2019). https://www.uxpin.com/. (Accessed on 18-03-2020).

UXPin (2015). Mobile UI Design Patterns. url: http://studio.uxpin.com/ebooks/mobile- design-patterns/.

UXToolbox. http://www.softandgui.co.uk/pages/tour/UXToolbox%20Wireframing%20Tool.aspx. (Accessed on 18-03-2020).

Van den Bergh, Jan, Deepak Sahni, Mieke Haesen, Kris Luyten, and Karin Coninx (2011). “GRIP: Get better Results from Interactive Prototypes.” In: Proceedings of the 3rd ACM SIGCHI symposium on Engineering interactive computing systems - EICS ’11, p. 143. doi: 10. 1145/1996461.1996508. url: http://dl.acm.org/citation.cfm?id=1996461.1996508.

Van Duyne, Douglas K, James A Landay, and Jason I Hong (2007). The design of sites: Patterns for creating winning web sites. Prentice Hall Professional.

Van Welie, Martijn and Hallvard Trætteberg (2000). “Interaction patterns in user interfaces.” In: 7th. Pattern Languages of Programs Conference, pp. 13–16. doi: 10.1007/978-1-4471- 0279-3_30.

Van Welie, Martijn, Gerrit C Van Der Veer, and Anton Eliéns (2001). “Patterns as tools for user interface design.” In: Tools for Working with Guidelines. Springer, pp. 313–324.

Vectr. https://vectr.com/. (Accessed on 18-03-2020).

Vicente, Kim J, D Craig Thornton, and Neville Moray (1987). “Spectral analysis of sinus arrhythmia: A measure of mental effort.” In: Human factors 29.2, pp. 171–182.

Vidulich, Michael A (1989). “The use of judgment matrices in subjective workload assessment: The subjective workload dominance (SWORD) technique.” In: Proceedings of the Human Factors Society Annual Meeting. Vol. 33. 20. SAGE Publications Sage CA: Los Angeles, CA, pp. 1406–1410.

Vidulich, Michael A and Pamela S Tsang (1987). “Absolute magnitude estimation and relative judgement approaches to subjective workload assessment.” In: Proceedings of the Human Factors Society Annual Meeting. Vol. 31. 9. SAGE Publications Sage CA: Los Angeles, CA, pp. 1057–1061.

Vidulich, Michael A and Christopher D Wickens (1986). “Causes of dissociation between subjective workload measures and performance: Caveats for the use of subjective assess- ments.” In: Applied Ergonomics 17.4, pp. 291–296.

Vidulich, Michael A, G Frederic Ward, and James Schueren (1991). “Using the subjective workload dominance (SWORD) technique for projective workload assessment.” In: Human Factors 33.6, pp. 677–691.

Virzi, Robert A, Jeffrey L Sokolov, and Demetrios Karis (1996). “Usability Problem Iden- tification Using Both Low- and High-fidelity Prototypes.” In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. CHI ’96. New York, NY, USA: ACM, pp. 236–243. isbn: 0-89791-777-4. doi: 10.1145/238386.238516. url: http://doi.acm. org/10.1145/238386.238516.

Visual Paradigm. https://www.visual-paradigm.com/. (Accessed on 29-08-2020).

Visual Studio 2019. https://visualstudio.microsoft.com/vs/mac/. (Accessed on 18-03-2020).

Walker, Miriam, Leila Takayama, and James A. Landay (2002). “High-Fidelity or Low-Fidelity, Paper or Computer? Choosing Attributes when Testing Web Prototypes.” In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting 46.5, pp. 661–665. issn: 1541-9312. doi: 10.1177/154193120204600513. url: http://journals.sagepub.com/doi/10.1177/154193120204600513.

Walmart. https://one.walmart.com/content/walmartbrandcenter/home/walmart-brand-center-0/walmart-brand-center-011.html. (Accessed on 11-10-2020).

Webflow. https://webflow.com/. (Accessed on 18-03-2020).

Weiss, Scott (2002). Handheld usability. New York, USA: Wiley, p. 292. isbn: 9780470852927.

Weld. https://www.weld.io/. (Accessed on 18-03-2020).

Wesson, J. L., N. L. O. Cowley, and C. E. Brooks (2017). “Extending a Mobile Prototyping Tool to Support User Interface Design Patterns and Reusability.” In: Proceedings of the South African Institute of Computer Scientists and Information Technologists. SAICSIT ’17. Thaba ’Nchu, South Africa: Association for Computing Machinery. isbn: 9781450352505. doi: 10.1145/3129416.3129444. url: https://doi.org/10.1145/3129416.3129444.

Wetchakorn, Thara and Nakornthip Prompoon (July 2015). “Method for mobile user interface design patterns creation for iOS platform.” In: 2015 12th International Joint Conference on Computer Science and Software Engineering (JCSSE). IEEE, pp. 150–155. isbn: 978-1-4799-1966-6. doi: 10.1109/JCSSE.2015.7219787. url: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7219787.

Whimsical. https://whimsical.com/. (Accessed on 29-08-2020).

Williams, Cindy and Gordon Crawford (1980). Analysis of Subjective Judgment Matrices. Tech. rep. Rand Corp, California, U.S.

Williams, Robin (1993). The Non-Designer’s Design Book. Peachpit Press.

Winn, Tiffany and Paul Calder (2002). “Is this a pattern?” In: IEEE software 19.1, pp. 59–66.

Wireflow. https://wireflow.co/. (Accessed on 29-08-2020).

Wireframe.cc. https://wireframe.cc/. (Accessed on 18-03-2020).

WireframeSketcher. https://wireframesketcher.com/. (Accessed on 18-03-2020).

Wires. https://quirktools.com/wires/. (Accessed on 18-03-2020).

Xcode. https://developer.apple.com/xcode/. (Accessed on 18-03-2020).

Yeh, Yei-Yu and Christopher D Wickens (1988). “Dissociation of performance and subjective measures of workload.” In: Human Factors 30.1, pp. 111–120.

yUML. https://yuml.me/. (Accessed on 29-08-2020).

Zendesk. https://garden.zendesk.com/. (Accessed on 11-10-2020).

Zeplin. https://zeplin.io/. (Accessed on 18-03-2020).

ZURB (2017). Pattern Tap. http://patterntap.com/patterntap. (Accessed on 13-10-2018).

PUBLICATIONS FOR DISSERTATION

Pandian, Vinoth Pandian Sermuga, Sarah Suleri, Christian Beecks, and Matthias Jarke (2020). “MetaMorph: AI Assistance to Transform Lo-Fi Sketches to Higher Fidelities.” In: Proceedings of the 32nd Australian Conference on HCI. OzCHI ’20. Sydney, Australia: Association for Computing Machinery.

Pandian, Vinoth Pandian Sermuga, Sarah Suleri, and Matthias Jarke (2020). “Syn: Synthetic Dataset for Training UI Element Detector From Lo-Fi Sketches.” In: Proceedings of the 25th International Conference on Intelligent User Interfaces Companion. IUI ’20. Cagliari, Italy: Association for Computing Machinery, pp. 79–80. isbn: 9781450375139. doi: 10.1145/3379336.3381498. url: https://doi.org/10.1145/3379336.3381498.

Pandian, Vinoth Pandian Sermuga, Sarah Suleri, and Matthias Jarke (2021). “UISketch: A Large-Scale Dataset of UI Element Sketches.” In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. CHI ’21. Yokohama, Japan: Association for Computing Machinery.

Pandian, Vinoth Pandian Sermuga and Sarah. Suleri (2020). “NASA-TLX Web App: An Online Tool to Analyse Subjective Workload.” In: arXiv preprint arXiv:2001.09963.

Suleri, Sarah, Yeganeh Hajimiri, and Matthias Jarke (2020). “Impact of using UI Design Patterns on the Workload of Rapid Prototyping of Smartphone Applications: An Experimental Study.” In: Proceedings of the 22nd International Conference on Human-Computer Interaction with Mobile Devices and Services. MobileHCI ’20. New York, NY, USA: Association for Computing Machinery. isbn: 9781450380522. doi: 10.1145/3406324.3410718. url: https://doi.org/10.1145/3406324.3410718.

Suleri, Sarah and Matthias Jarke (2020). “Eve: Comprehensive Support to UI Prototyping.” Under review at ACM Transactions on Computer-Human Interaction (TOCHI).

Suleri, Sarah, Nilda Kipi, Linh Chi Tran, and Matthias Jarke (2019). “UI Design Pattern-Driven Rapid Prototyping for Agile Development of Mobile Applications.” In: Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services. MobileHCI ’19. Taipei, Taiwan: Association for Computing Machinery. isbn: 9781450368254. doi: 10.1145/3338286.3344399. url: https://doi.org/10.1145/3338286.3344399.

Suleri, Sarah, Vinoth Pandian Sermuga Pandian, Svetlana Shishkovets, and Matthias Jarke (2019). “Eve: A Sketch-Based Software Prototyping Workbench.” In: Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems. CHI EA ’19. Glasgow, Scotland UK: Association for Computing Machinery, pp. 1–6. isbn: 9781450359719. doi: 10.1145/3290607.3312994. url: https://doi.org/10.1145/3290607.3312994.

OTHER PUBLICATIONS

Jdeed, Midhat, Melanie Schranz, Alessandra Bagnato, Sarah Suleri, Gianluca Prato, Davide Conzon, Micha Sende, Etienne Brosse, Claudio Pastrone, and Wilfried Elmenreich (2019). “The CPSwarm Technology for Designing Swarms of Cyber-Physical Systems.” In: STAF (Co-Located Events), pp. 85–90.

Pandian, Vinoth Pandian Sermuga and Sarah Suleri (2020). “BlackBox Toolkit: Intelligent Assistance to UI Design.” In: arXiv preprint arXiv:2004.01949.

Pandian, Vinoth Pandian Sermuga, Sarah Suleri, and Matthias Jarke (2020). “Blu: What GUIs Are Made Of.” In: Proceedings of the 25th International Conference on Intelligent User Interfaces Companion. IUI ’20. Cagliari, Italy: Association for Computing Machinery, pp. 81–82. isbn: 9781450375139. doi: 10.1145/3379336.3381497. url: https://doi.org/10.1145/3379336.3381497.

Pillai, Ajit G., Naseem Ahmadpour, Soojeong Yoo, A. Baki Kocaballi, Sonja Pedell, Vinoth Pandian Sermuga Pandian, and Sarah Suleri (2020). “Communicate, Critique and Co-Create (CCC) Future Technologies through Design Fictions in VR Environment.” In: Companion Publication of the 2020 ACM Designing Interactive Systems Conference. DIS ’20 Companion. Eindhoven, Netherlands: Association for Computing Machinery, pp. 413–416. isbn: 9781450379878. doi: 10.1145/3393914.3395917. url: https://doi.org/10.1145/3393914.3395917.

Suleri, Sarah (2020). “Brainstorming 101: An Introduction to Ideation Techniques.” In: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems. CHI EA ’20. Honolulu, HI, USA: Association for Computing Machinery, pp. 1–4. isbn: 9781450368193. doi: 10.1145/3334480.3375045. url: https://doi.org/10.1145/3334480.3375045.
