Shaking the Usability Tree
BEHAVIOUR & INFORMATION TECHNOLOGY
https://doi.org/10.1080/0144929X.2018.1541255

Shaking the usability tree: why usability is not a dead end, and a constructive way forward

Simone Borsci (a,b), Stefano Federici (c), Alessio Malizia (d,e) and Maria Laura De Filippis (f)

(a) Department of Cognitive Psychology and Ergonomics, Faculty of Behavioural, Management and Social Sciences, University of Twente, Enschede, Netherlands; (b) National Institute for Health Research, London IVD Co-operative, Faculty of Medicine, Department of Surgery & Cancer, Imperial College, London, UK; (c) Department of Philosophy, Social and Human Sciences and Education, University of Perugia, Perugia, Italy; (d) School of Creative Arts, University of Hertfordshire, Hertfordshire, UK; (e) Faculty of Logistics, Molde University College, Molde, Norway; (f) CognitiveLab Research Group, Perugia, Italy

ARTICLE HISTORY: Received 6 June 2018; Accepted 22 October 2018

KEYWORDS: Human–machine interface; interaction design; ISO 9241-11; usability; usability factors; usability testing

ABSTRACT
A recent contribution to the ongoing debate concerning the concept of usability and its measures proposed that usability has reached a dead end, i.e. that it is a construct unable to provide stable results and to unify scientific knowledge. Extensive commentaries rejected the conclusion that researchers need to look for alternative constructs to measure the quality of interaction. Nevertheless, several practitioners involved in this international debate asked for a constructive way to move usability practice forward. Two key issues of the usability field were identified in this debate: (i) knowledge fragmentation in the scientific community, and (ii) the unstable relationship among the usability metrics. We recognise both the importance and the impact of these key issues although, in line with others, we may not agree with the conclusion that usability is a dead end. In the light of the international debate, this work discusses the strengths and weaknesses of the usability construct and its application. Our discussion focuses on identifying alternative explanations for the issues and on suggesting mitigation strategies, which may be considered a starting point for moving the usability field forward. However, actions by the scientific community will be needed to implement these mitigation strategies and to harmonise usability practice.

1. Introduction

The formal umbrella construct of usability was defined in 1998 by the International Organization for Standardization as follows: 'The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use' (ISO 9241-11, 1998). This definition aims to bridge and connect different definitions and standards of interaction design. Moreover, this umbrella provides a common framework for usability engineering (Bevan 1995; Bevan et al. 2016; ISO 9241-11, 1998).

Before and after the formalisation of the ISO 9241-11 (1998), experts have attempted to critically discuss and extend the theoretical framework of usability (e.g. Quesenbery 2003).

The history of interaction design and evaluation is populated by alternative, and sometimes conflicting, constructs and definitions of usability. A detailed review of the issues associated with the conceptualisation of usability was recently proposed by Tractinsky (2018). This author suggested that the framework of the ISO 9241-11 is collapsing and that usability is a dead end. Tractinsky identified two main theoretical reasons to claim that the construct is falling apart, which we may summarise as follows:

(i) The broad characterisation of the usability construct. Usability is not well characterised by the ISO 9241-11 definition, namely the umbrella construct. In fact, umbrella constructs are usually too broad and weak to adequately serve the scope of a scientific community. The uncertainty around the construct of usability is a barrier to accumulating and properly organising the disciplinary knowledge. For this reason, the scientific usability community is fragmented and experts are still debating the construct of usability and its assessment measures – i.e. the issue of the fragmentation of knowledge.
(ii) The unclear representation and operationalisation of the usability construct. Experts tend to operationalise their evaluations (protocols) in different ways, using a wide range of methods to assess different variables. This limits appropriate replication of findings and comparative analyses of usability evaluation studies (Hornbæk et al. 2014). Evidence gathered in comparative studies (Frøkjær, Hertzum, and Hornbæk 2000; Hornbæk 2006; Hornbæk and Law 2007; Sauro and Lewis 2009; Hornbæk et al. 2014) does not completely support the theoretical construct of usability. In fact, studies reported an unstable and weak relationship among the dimensions of efficiency, effectiveness, and satisfaction – i.e. the issue of the relationship among the usability metrics (see the illustrative sketch after this list).
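To make the second issue concrete, the brief sketch below shows one common way the three ISO 9241-11 dimensions are operationalised in a single usability test and how their pairwise relationship can be estimated as Pearson correlations. It is a minimal illustration only: the data are invented, and the choice of measures (task completion rate for effectiveness, task time for efficiency, a System Usability Scale score for satisfaction) and all variable names are assumptions made for this example, not taken from the article or the cited studies.

# Minimal, illustrative sketch only: invented data, hypothetical study.
# Effectiveness = task completion rate (0-1), efficiency = mean task time (s),
# satisfaction = System Usability Scale score (0-100), one value per participant.
import numpy as np

effectiveness = np.array([1.0, 0.8, 0.6, 1.0, 0.4, 0.9, 0.7, 1.0])
task_time_s   = np.array([95., 140., 210., 88., 260., 120., 175., 90.])
satisfaction  = np.array([82.5, 70.0, 55.0, 90.0, 42.5, 77.5, 60.0, 85.0])

# Express efficiency so that higher values mean better performance,
# then compute the pairwise Pearson correlations among the three dimensions.
efficiency = -task_time_s
r = np.corrcoef(np.vstack([effectiveness, efficiency, satisfaction]))

print(f"effectiveness vs efficiency:   r = {r[0, 1]:.2f}")
print(f"effectiveness vs satisfaction: r = {r[0, 2]:.2f}")
print(f"efficiency vs satisfaction:    r = {r[1, 2]:.2f}")

Comparative work of the kind cited in point (ii) essentially asks how stable such pairwise coefficients remain when they are computed across many studies, methods, and reporting practices.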
Several authors (Bardzell 2018; Bertelsen 2018; Hertzum 2018; Hornbæk 2018; Reeves 2018; Roto 2018; Stage 2018) commented on Tractinsky's proposal, mostly rejecting the conclusion that usability is a dead end, but also recognising the need for a way to move the usability field forward constructively.

We acknowledge that both issues (the fragmentation of knowledge and the unstable relationship among the usability metrics) are key challenges for the usability field. Tractinsky's perspective (2018) is that these issues originate from the uncertainty associated with the weak umbrella construct of the ISO 9241-11. Tractinsky suggests that experts have to look for an alternative (and hopefully more robust) set of evaluation metrics yet to be identified. Therefore, this author proposes to abandon or to significantly revise the current construct.

The present work takes a different approach from the commentaries cited above: we will neither simply reject nor endorse Tractinsky's reasoning and conclusion, but will attempt to define a constructive way forward. To achieve this aim, the present work critically discusses the two issues identified by Tractinsky and seeks evidence supporting alternative explanations of knowledge fragmentation and of the unstable relationship between the dimensions of the construct. Mitigation strategies against the issues identified above will be proposed to harmonise the fragmented field of usability and to enable appropriate ways to replicate and compare usability studies. Figure 1 provides a graphic overview of the contents of the four main sections of this work.

Figure 1. Structure of the work and key contents of the four main sections.

2. Theoretical issues

2.1. The broad characterisation of the usability construct: knowledge fragmentation

Umbrella definitions and terms are typical of multidisciplinary fields. These definitions usually aim to describe complex phenomena in order to bring together the perspectives of experts from different backgrounds. Umbrella constructs are developed by communities to provide experts with a common set of objectives within a shared theoretical framework. Examples of umbrella definitions, terms, and frameworks are quite common in the literature of different domains, e.g. pervasive developmental disorders (Caronna, Milunsky, and Tager-Flusberg 2008); disability (WHO 2001); advanced nursing practice (Lowe et al. 2012); the psychology of attitude (Eagly and Chaiken 2007); Public Service Motivation (Vandenabeele, Scheepers, and Hondeghem 2006), etc.

Comparative studies have reported correlations among the measures of efficiency, effectiveness, and satisfaction spanning on average from 0.25 to 0.6, e.g. in one study the level of correlation among the metrics used to measure efficiency, effectiveness, and satisfaction may turn out lower (or higher) than the correlation reported in other studies (Frøkjær, Hertzum, and Hornbæk 2000; Hornbæk 2006; Hornbæk and Law 2007; Sauro and Lewis 2009; Hornbæk et al. 2014). The variability of the relationship among the usability dimensions is presented by Tractinsky (2018) as the ultimate evidence of the failure of the usability construct. However, the authors of these comparative studies have underlined that the variability in the relationship among the usability dimensions is due to multiple factors, such as which methods are used and how, and how data are reported by professionals to ensure that results can be replicated and compared (Frøkjær, Hertzum, and Hornbæk 2000; Hornbæk 2006; Hornbæk and Law 2007; Sauro and Lewis 2009). In line with Hornbæk and colleagues (2014), 'replicability' is defined as the possibility to confirm, expand, or generalise previous findings. Tractinsky's (2018) assumption is that experts tend to operationalise the usability assessment in different ways because there is no clear relationship between the construct and its measures. We agree that