Embedding Training Within Warnings Improves Skills of Identifying Phishing Webpages

Aiping Xiong, Robert W. Proctor, Weining Yang, and Ninghui Li, Purdue University, West Lafayette, Indiana, USA

Objective: Evaluate the effectiveness of training embedded within security warnings to identify phishing webpages.

Background: More than 20 million malware and phishing warnings are shown to users of Google Safe Browsing every week. A substantial click-through rate is still evident, and a commonly reported issue is that users lack understanding of the warnings. Nevertheless, each warning provides an opportunity to train users about phishing and how to avoid phishing attacks.

Method: To test the use of phishing-warning instances as opportunities to train users' phishing webpage detection skills, we conducted an online experiment contrasting the effectiveness of the current Chrome phishing warning with two training-embedded warning interfaces. The experiment consisted of three phases. In Phase 1, participants made login decisions on 10 webpages with the aid of a warning. After a distracting task, participants made legitimacy judgments for 10 different login webpages without warnings in Phase 2. To test the long-term effect of the training, participants were invited back a week later to participate in Phase 3, which was conducted in the same way as Phase 2.

Results: Participants differentiated legitimate and fraudulent webpages better than chance. Performance was similar for all interfaces in Phase 1, for which the warning aid was present. However, the training-embedded interfaces provided better protection than the Chrome phishing warning in both subsequent phases.

Conclusion: Embedded training is a complementary strategy to compensate for lack of phishing webpage detection skill when a phishing warning is absent.

Application: Potential applications include development of training-embedded warnings to enable security training at scale.

Keywords: cybersecurity, phishing, training, action on cybersecurity, procedural knowledge

Address correspondence to Aiping Xiong, College of Information Sciences and Technology, The Pennsylvania State University, E373 Westgate Building, University Park, PA 16802, USA; e-mail: [email protected]

HUMAN FACTORS, Vol. 61, No. 4, June 2019, pp. 577–595. DOI: 10.1177/0018720818810942. Article reuse guidelines: sagepub.com/journals-permissions. Copyright © 2018, Human Factors and Ergonomics Society.

INTRODUCTION

Phishing is a social engineering attack that uses e-mail, social network webpages, and other media to communicate messages intended to persuade potential victims to perform certain actions or divulge confidential information for the attacker's benefit in the context of cybersecurity (Khonji, Iraqi, & Jones, 2013; Orgill, Romney, Bailey, & Orgill, 2004).

Because the website mimics that of a reputable organization, victims are tricked into entering personal information and credentials, which are then stolen by the attackers. Damages from phishing attacks include financial losses, exposure of private information, and reputational harm to companies. Phishing is estimated to have resulted in about $30 million in damages to U.S. consumers and businesses in 2017 (FBI, 2018). Beyond financial loss, users have reported reduced trust in people and technology as a consequence of phishing attacks (Kelley, Hong, Mayhorn, & Murphy-Hill, 2012).

Because of the negative consequences of phishing attacks, considerable effort has been devoted to devising methods to protect users from them. Detection and prevention of phishing scams is the first line of protection to stop attacks from reaching people. Computer scientists have developed several automated tools for phishing detection: (1) e-mail classification at the server and client levels to filter phishing e-mails (e.g., Fette, Sadeh, & Tomasic, 2007); (2) website blacklists consisting of phishing URLs and IP addresses detected in the past (e.g., Google Safe Browsing; Whittaker, Ryner, & Nazif, 2010) or of almost all possible variants of a URL (e.g., Prakash, Kumar, Kompella, & Gupta, 2010); (3) heuristic solutions based on sets of rules from previous real-time phishing attacks to detect zero-day (i.e., previously unknown) phishing attacks (e.g., Zhang, Hong, & Cranor, 2007); and (4) webpage visual-similarity assessments to block phishing websites (e.g., Fu, Liu, & Deng, 2006). However, those tools and services do not protect against all phishing, owing to the evolution of phishing attacks and the difficulty computers have in accurately extracting the meaning of natural-language messages in e-mails (Stone, 2007).
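To make the flavor of approaches (2) and (3) concrete, the sketch below combines a blacklist lookup with a few lexical URL heuristics. It is a minimal illustration in Python, not code from any of the cited systems; the blacklist entries and heuristic rules are hypothetical.

```python
# Minimal illustration of approaches (2) and (3) above: a blacklist lookup
# plus simple lexical heuristics over the URL. Entries and rules are
# hypothetical placeholders, not taken from the cited detection systems.
from urllib.parse import urlparse

KNOWN_PHISHING_HOSTS = {          # (2) hosts detected in the past (hypothetical)
    "paypa1-secure-login.example",
    "appleid-verify.example.net",
}

def looks_suspicious(url: str) -> bool:
    """Return True if the URL is blacklisted or trips simple heuristic rules."""
    host = (urlparse(url).hostname or "").lower()
    if host in KNOWN_PHISHING_HOSTS:      # (2) exact blacklist match
        return True
    # (3) crude rule-based heuristics of the kind zero-day detectors build on
    if host.count(".") > 3:               # unusually deep subdomain nesting
        return True
    if "@" in url:                        # user-info trick that hides the real host
        return True
    if any(ch.isdigit() for ch in host.split(".")[0]) and "login" in url.lower():
        return True                       # digit-substituted brand name on a login page
    return False

if __name__ == "__main__":
    for url in ("https://www.paypal.com/signin",
                "http://paypa1-secure-login.example/login"):
        print(url, "->", "suspicious" if looks_suspicious(url) else "no flag")
```

Deployed services query maintained blacklists and apply far richer rule sets, but the two-stage structure of the sketch, a known-bad lookup followed by heuristics, mirrors the categories listed above.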
When automatic detection fails, the user makes the final decision about a webpage's legitimacy (Proctor & Chen, 2015). Thus, researchers have developed decision-aid tools that warn users when a fraudulent website is detected. These tools include dynamic security skins (Dhamija & Tygar, 2005), browser toolbars (Herzberg & Gbara, 2004), and web-browser phishing and secure sockets layer (SSL) warnings (Carpenter, Zhu, & Kolimi, 2014; Felt et al., 2015). Such tools remind users of potential risks either passively or actively. Passive warnings employ cues, such as colored icons or highlighting, that signal potential dangers to users without interrupting their primary tasks (Chou, Ledesma, Teraguchi, & Mitchell, 2004; Herzberg & Gbara, 2004; Lin, Greenberg, Trotter, Ma, & Aycock, 2011). Active warnings capture users' attention by forcing them to choose one of the options presented by the warnings (Egelman, Cranor, & Hong, 2008; Felt et al., 2015; Wu, Miller, & Garfinkel, 2006).

Yet these decision-aid tools have shown evidence of ineffectiveness (e.g., Xiong, Proctor, Yang, & Li, 2017) and usability problems (e.g., Sheng et al., 2009; Wu et al., 2006). Specifically, people showed a lack of understanding of decision-aid warnings in general (e.g., Felt et al., 2015; Wu et al., 2006). Training is one promising approach to address users' lack of comprehension, and a prior study provided evidence that knowledge gained from training enhanced the effectiveness of a phishing warning (Yang, Xiong, Chen, Proctor, & Li, 2017). Currently, there is little work on integrating phishing training and warnings. We conjectured that such research is essential because of (a) the inability to require the large population of internet users to take classroom training and (b) the minimal warning protection against zero-day attacks.

Our aim in the current study was to understand the effect of training embedded within phishing warnings in helping users detect phishing webpages. We conducted an experiment to address three research questions:

1. What are the short- and long-term effects of training that is embedded within a phishing warning?
2. Which is the most effective way to present training to help users learn skills for identifying the legitimacy of a webpage?
3. Does presenting training-embedded warnings as feedback on users' actions facilitate the effect of training?

ACTION-ORIENTED PHISHING PROTECTION STRATEGIES

Phishing Warning

When warnings were presented to aid users' decisions, users who clicked through the warnings showed a lack of understanding of them (e.g., Bravo-Lillo, Cranor, Downs, & Komanduri, 2011; Dhamija, Tygar, & Hearst, 2006). These findings are somewhat unexpected because most of the warning designs followed guidelines intended to improve users' understanding of the risks, for example, using direct language and symbols to describe the explicit consequences of the risk (Felt et al., 2015; Yang et al., 2017). Nevertheless, scrutiny of the information presented in those warnings revealed a focus on facts about phishing (e.g., its definition and potential costs), also known as declarative knowledge (Anderson, 2013).

Downs, Barbagallo, and Acquisti (2015) investigated differences between declarative knowledge about phishing and procedural knowledge of the actions needed to determine URL legitimacy (Anderson, 2013). In an online role-play study, participants chose possible actions for legitimate and fraudulent e-mails and possible actions for the webpages following each e-mail's link. Declarative knowledge was closely related to participants' self-reported predictions of awareness, susceptibility, and intentions, but procedural knowledge was the only predictor of the users' ability to adjust their risk decisions.

Xiong et al. (2017) conducted a study, in a laboratory setting with an eye tracker, investigating why a passive warning (domain highlighting) is ineffective at helping users identify phishing webpages. They based their study on the fact that the domain name embedded within the URL of a phishing site will always differ from the legitimate one. Thus, the mismatch between the real domain name and the impersonated webpage serves as a reliable cue for detecting phishing attacks (Lin et al., 2011).
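The mismatch cue lends itself to a short illustration: extract the host from the URL in the address bar and compare it with the domain the impersonated brand actually uses. The sketch below assumes a small, hypothetical brand-to-domain mapping; it is not the procedure used in the cited studies.

```python
# Illustration of the domain-mismatch cue: the host in the address bar should
# be the impersonated brand's real domain (or a subdomain of it). The
# EXPECTED_DOMAIN mapping is a hypothetical stand-in for user knowledge.
from urllib.parse import urlparse

EXPECTED_DOMAIN = {
    "PayPal": "paypal.com",
    "Chase": "chase.com",
}

def domain_matches(brand: str, url: str) -> bool:
    """True if the URL's host is the brand's domain or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    real = EXPECTED_DOMAIN.get(brand, "")
    return bool(real) and (host == real or host.endswith("." + real))

if __name__ == "__main__":
    # Legitimate: the host is a subdomain of paypal.com.
    print(domain_matches("PayPal", "https://www.paypal.com/signin"))             # True
    # Phishing: "paypal.com" appears only as a label inside another domain.
    print(domain_matches("PayPal", "https://paypal.com.secure-login.example/"))  # False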
Because users may overlook the domain name (Jagatic, Johnson, Jakobsson, & Menczer, 2007), the domain of whichever site a user is currently

goal. The results obtained in the warning-only condition are similar to previous findings (e.g., Felt et al., 2015), suggesting that security awareness alone is not sufficient to protect users from phishing attacks. The power of using a combination of training and phishing warning to reduce the likelihood of being phished provided evidence that participants should not only be aware
