
Wang et al. This paper was accepted by Cybersecurity (ISSN: 2523-3246) on 2021-4-28. doi:10.1186/s42400-021-00094-6

RESEARCH

Social Engineering in Cybersecurity: A Domain Ontology and Application Examples

Zuoguang Wang1,2*, Hongsong Zhu1,2*, Peipei Liu1,2 and Limin Sun1,2

Abstract: Social engineering poses a serious threat to cyberspace security. To protect against social engineering attacks, a fundamental step is to know what constitutes social engineering. This paper first develops a domain ontology of social engineering in cybersecurity and evaluates the ontology through its knowledge graph application. The domain ontology defines 11 concepts of core entities that significantly constitute or affect the social engineering domain, together with 22 kinds of relations describing how these entities relate to each other. It provides a formal and explicit knowledge schema to understand, analyze, reuse and share domain knowledge of social engineering. Furthermore, this paper builds a knowledge graph based on 15 social engineering attack incidents and scenarios. Seven knowledge graph application examples (in six analysis patterns) demonstrate that the ontology together with the knowledge graph is useful to 1) understand and analyze social engineering attack scenarios and incidents, 2) find the top-ranked social engineering threat elements (e.g. the most exploited human vulnerabilities and the most used attack mediums), 3) find potential social engineering threats to victims, 4) find potential targets for social engineering attackers, 5) find potential attack paths from a specific attacker to a specific target, and 6) analyze same-origin attacks.

Keywords: Social engineering attack; Cyber security; Ontology; Knowledge graph; Attack scenarios; Threat analysis; Attack path; Attack model; Taxonomy; Composition and structure

1 Introduction

In the context of cybersecurity, social engineering describes a type of attack in which the attacker exploits human vulnerabilities (by means such as influence, persuasion, deception, manipulation and inducing) to breach the security goals (such as confidentiality, integrity, availability, controllability and auditability) of cyberspace elements (such as infrastructure, data, resource, user and operation). Succinctly, social engineering is a type of attack wherein the attacker exploits human vulnerability through social interaction to breach cyberspace security [1]. Many distinctive features make social engineering a quite popular attack in the hacker community and a serious, universal and persistent threat to cyber security. 1) Compared to classical attacks such as password cracking by brute force and software vulnerability exploits, social engineering exploits human vulnerabilities to bypass or break through security barriers, without having to combat firewalls or antivirus software by deep coding. 2) For some attack scenarios, social engineering can be as simple as making a phone call and impersonating an insider to elicit the classified information. 3) Especially in past decades, defense mainly focused on the digital domain yet overlooked human factors in security. With the development of security technology, classical attacks become harder, and more and more attackers turn to social engineering. 4) Human vulnerabilities seem inevitable; after all, there is no cyber system that does not rely on humans or involve human factors, and these human factors are obviously vulnerable or can largely be turned into security vulnerabilities by skilled attackers. Moreover, the social engineering threat is increasingly serious along with its evolution in new technical and cyber environments. Social engineering gets not only large amounts of sensitive information about people, networks and devices but also more attack channels with the wide applications of

*Correspondence: [email protected]; [email protected]
1School of Cyber Security, University of Chinese Academy of Sciences, Beijing, CN
2Beijing Key Laboratory of IoT Information Security Technology, Institute of Information Engineering, Chinese Academy of Sciences, Beijing, CN
Full list of author information is available at the end of the article

This article has been accepted for publication in a future issue of Cybersecurity, but has not been fully edited. Content may change prior to final publication. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/.

Social Networking Sites (SNSs), Internet of Things (IoT), Industrial Internet, mobile communication and wearable devices. A large part of this information is open source, which simplifies information gathering for social engineering. Social engineering is becoming more efficient and automated through technology such as machine learning and artificial intelligence. As a result, a large group of targets can be reached, and specific victims can be carefully selected to craft more credible attacks. The spread of social engineering tools decreases the threat threshold. Loose office policy (bring your own device, remote office, etc.) weakens the area-isolation of different security levels and creates more attack opportunities. Targeted, large-scale, robotic, automated and advanced social engineering attacks are becoming possible [1].

To protect against social engineering, the fundamental work is to know what social engineering is, what entities significantly constitute or affect social engineering, and how these entities relate to each other. Study [1] proposed a definition of social engineering in cybersecurity based on a systematic conceptual evolution analysis. Yet the definition alone is not enough to gain insight into all the issues above, nor, further, to serve as a tool for analyzing social engineering attack scenarios or incidents and for providing a formal, explicit, reusable knowledge schema of the social engineering domain.

Ontology is a term that comes from philosophy, where it describes the existence of beings in the world; it has been adopted in informatics, the semantic web, and Artificial Intelligence (AI), in which an ontology is a formal, explicit description of knowledge as a set of concepts within a domain and the relationships among them (i.e. what entities exist in a domain and how they are related). It defines a common vocabulary for researchers who need to share information and includes definitions of basic concepts in the domain and their relations [2]. In an ontology, semantic information and components such as concepts, objects, relations, attributes, constraints and axioms are encoded or formally specified, by which an ontology is machine-readable and has the capacity for reasoning. In this way, an ontology not only introduces a formal, explicit, shareable and reusable knowledge representation but can also add new knowledge about the domain.

Thus, we propose a domain ontology of social engineering to understand, analyze, reuse and share domain knowledge of social engineering.

Organization: Section 2 describes the background material and methodology to develop the domain ontology. Section 3 presents the material and ontology implementation. Section 4 is the result: a domain ontology of social engineering in cybersecurity. Section 5 is the evaluation and application of the ontology and knowledge graph. Section 6 is the discussion. Section 7 concludes the paper.

2 Methodology to develop domain ontology

There is no single correct way or methodology for developing ontologies [2], since ontology design is a creative process and many factors will affect the design choices, such as the potential applications of the ontology, the designer's understanding and view of the domain, different domain features, and anticipations that the ontology be more intuitive, general, detailed, extensible and / or maintainable.

In this paper, we design the methodology to develop the domain ontology of social engineering based on the method reported in work [2], with some modifications. Protégé 5.5.0 [3] is used to edit and implement the ontology. It should be noted that an "entity" in the real world is described as a "concept" in ontology and a "class" in Protégé; a "relation" is described as an "object property" in Protégé. The methodology is described in Figure 1.

Figure 1 Overview of methodology to develop domain ontology of social engineering (steps: determine the domain, purpose and scope; consider reusing existing ontologies; enumerate important terms in the ontology; define core concepts, concept taxonomy and description; define relations, relation description and characteristic; define other descriptions, e.g. rules, annotations, axioms; validate and revise; result: ontology)

(1) Determine the domain, purpose and scope.
As described before, the domain of the ontology is social engineering in cybersecurity. The purpose of the ontology is, i) for design, to present what entities significantly constitute or affect social engineering and how these entities relate to each other, and ii) for application, to serve as a tool for understanding social engineering, analyzing social engineering attack scenarios or incidents, and providing a formal, explicit, reusable knowledge schema of the social engineering domain. The scope is social engineering itself as a type of attack; measures regarding social engineering defense will not be included here although they are important. Defense will be the theme of our future work.

(2) Consider reusing existing ontologies.
We did a systematic literature survey on social engineering and accumulated a literature database which contains 450+ studies from September 1984 (the time of the earliest available literature in which the term "social engineering" was found in cybersecurity [1]) to May 2020.[1] Few works focus on a social engineering ontology, yet a lot of terms can be obtained from the literature survey.

(3) Enumerate important terms in the ontology.
"Initially, it is important to get a comprehensive list of terms without worrying about overlap between concepts they represent, relations among the terms ..." [2]. These terms are useful to intuitively and quickly get a sketchy understanding of a domain, and helpful to develop a core concept set after due consideration. A total of 350 relevant terms are enumerated from the literature database mentioned in (2). Table 1 shows these terms in a compact layout ordered by length.[2]

The next two steps are the most important steps in the ontology design process [2].

(4) Define core concepts, concept taxonomy and description.
In work [2], this step is to create the class hierarchy for a single concept, "Wine". However, the "class, sub-class" hierarchy is a structure typically used for classification, in which only the relation "is a" or "is type of" is described. This is not the purpose of this paper. Thus, differently, we define a set of concepts for entities which significantly constitute or affect the social engineering domain and discuss their taxonomy. Then, we define more expressive relations among the concepts in the next step.

For each core concept, a definition is provided and relevant synonym terms are mentioned, to facilitate the reuse and sharing of domain knowledge. For example, the attacker (a.k.a. social engineer) is the party that conducts a social engineering attack; it can be an individual or an organization, and internal or external. In Protégé, these concepts are edited in the "Classes" tab. Two classes, "Attacker" and "Social Engineer", are created, and because they represent the same class (concept), a description (class axiom) "Equivalent To" is set between them in the "Description" tab, as Figure 2 shows.

Figure 2 Edit concepts and their description

(5) Define relations, relation description and characteristic.
In this step we create the relations among concepts based on their definitions. Some relations are directly expressed in the definitions while some may be implicit and need an explicit description. For example, attack motivation is the factors that motivate (incent, drive, cause or prompt) the attacker to conduct a social engineering attack; thus, a concise relation "motivate" from "attack motivation" to "attacker" can be created. And to be more compatible, two sub-relations "incent" and "drive" or another equivalent relation can be added. In Protégé, these relations are edited in the "Object properties" tab. For the above example, "motivate" is created as an object property; "Attack Motivation" is its Domain and "Attacker" is its Range. Because it represents that one class points to another, different class, the relation characteristic "Irreflexive" is set, as Figure 3 shows.

Figure 3 Edit relations, relation description and characteristic

(6) Define other descriptions.
Besides the above, other descriptions can be added, such as annotations, axioms and rules. Examples are as follows. For the class "Attacker", its definition can be added as a comment in the Annotations tab with "rdfs:comment", to facilitate conceptual understanding and later debugging. Axioms are statements that are asserted to be true.

[1] The literature database was submitted as supplementary material for review.
[2] Term lists organized by alphabetical order and semantic groups were submitted as supplementary material for review.
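The schema edits of steps (4) and (5) can be mimicked outside Protégé. The following Python sketch is a simplified, illustrative stand-in (the dict-based schema and function names are our own, not OWL syntax or the Protégé API); it encodes the "Attacker" / "Social Engineer" equivalence and the "motivate" object property with its Domain, Range and Irreflexive characteristic:

```python
# Illustrative stand-in for the Protege edits in steps (4)-(5).
# The schema below is a hypothetical simplification, not OWL.

concepts = {"Attacker", "Social Engineer", "Attack Motivation"}

# Class axiom: "Attacker" and "Social Engineer" denote the same concept.
equivalent_classes = [{"Attacker", "Social Engineer"}]

object_properties = {
    "motivate": {
        "domain": "Attack Motivation",
        "range": "Attacker",
        "characteristics": {"Irreflexive"},
    },
}

def canonical(concept):
    """Resolve a concept to one canonical name via the equivalence axioms."""
    for group in equivalent_classes:
        if concept in group:
            return min(group)  # any deterministic representative works
    return concept

def check_assertion(prop, subj, obj):
    """Validate a relation assertion against the schema (a rough analogue
    of what a reasoner checks; OWL irreflexivity actually concerns
    individuals, so this class-level check is a simplification)."""
    p = object_properties[prop]
    if "Irreflexive" in p["characteristics"] and canonical(subj) == canonical(obj):
        return False  # an irreflexive relation may not point back to itself
    return p["domain"] == subj and p["range"] == obj
```

For example, `check_assertion("motivate", "Attack Motivation", "Attacker")` passes, while asserting "motivate" from "Attack Motivation" to itself is rejected by the irreflexive check.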

Table 1 Terms related to social engineering in cybersecurity

APT groups Facebook Instagram person name attack vector company partner reciprocity norm mobile application posts in social media fun hacker integrity server name central route confidentiality religion belief network disruption similarity and liking war hubris gluttony interests take effect desk sniffing controllability shoulder surfing penetration tester vulnerability exploit XSS induce humility prejudice attach files eavesdropping decision making social relations perform attack intellectual challenge bias letter identity principle attack model employee name deindividuation website phishing source credibility internal phone numbers card medium kindness secretary attack skill impersonation direct approach adjacent overhear telephone operator malicious popup window CSRF motive laziness self-love auditability item dropping dumpster diving attack motivation trust relationship obtain physical access envy piston LinkedIn terrorism availability launch attack economic profit behavioral habits authoritative voice reputation destruction fear spying openness user name carelessness name-dropping fake mobile app computer operator cultural disruption social engineering bot goal target password compliment email footer reverse sting group influence conscientiousness data exfiltration social exchange theory hoax victim phishing conformity email format security risk instant message data modification of service software vulnerability KeeK baiting phreaker contractor equivocation self interest office snooping deceptive website and feeling thought and expression lust charity pleasure diffidence extraversion time pressure craft attack drive-by download executive assistant vulnerability analysis name clients politics excitement face-to-face trojan attack self-disclosure drive-by-pharming individual attacker portable storage drives scam disgust RFID tag flirtation friendliness trojan device social disorder external attacker intuitive judgement questionnaire surveying SNSs friends 
scarcity heuristics inexperience vulnerability social engineer external pressure neurophysiological Social Networking Sites Google+ smishing job title watering hole social software facial expression organizational logo the quest for knowledge cloud hobbies software moral duty interruption attack pattern thoughtlessness instant messenger unauthorized access administrative assistant dread manager strategy motivation attack purpose application name internal attacker cognitive dissonance organizational structure email manuals surprise persuasion IP addresses build relation attack framework IT infrastructure disgruntled employee voice mail systems vendor greed partner sympathy pretending manipulation financial gain attack technique network intrusion facial action coding commitment and consistency phisher trailing pretexting masquerading framing effect bystander effect personal interest interesting malwares human resources department lingo picture trashing road apple new employee identity thief physical presence network interception reverse social engineering photo profile weakness attack goal piggybacking image spoiling data destruction physical sabotage rapport relationship social responsibility norm prank purpose authority attack path quid pro quo mobile devices data fabrication political purpose system administrator diffusion of responsibility Skype QR code bluetooth attack plan receptionist mobile website e-mail addresses social validation Voice over IP (VoIP) short message service (SMS) sloth revenge calendars connections social proof movable device effect mechanism sports fanaticism accounting department Elaboration Likelihood Model trick sadness credulity distraction stereotyping pop-up windows financial return technical support attacker organization computer hardware manufacturer video tension curiosity elicitation thinking set security guard foot-in-the-door attack consequence competitive advantage computer software manufacturer weibo Twitter deception trust 
theory social network IT professionals creating confusion fixed-action patterns telephone system administrator wrath vishing happiness helpfulness agreeableness spear phishing mental shortcuts employee functions impression management low level of need for cognition apathy website help desk indifferent attack medium urgent request micro expression fake business card information gathering increasing the number of friends attack whaling ignorance information attack method attack approach peripheral route family information instant communication IVR (Interactive Voice Response) awards attacker impulsion neuroticism attack target attack strategy person-to-person gather information language and thinking interpersonal deception theory (IDT) Flickr courtesy influence overloading attack threat attacker group phone, telephone habitual behaviors organizational policy integrative model of organizational trust

For the relation "motivate", we can create an inverse relation "motivated by" and then set the description (object property axiom) "Inverse Of" against "motivate", to facilitate expressions like "attacker is motivated by certain attack motivation". An ontology can also generate new knowledge by reasoning with rules. Assume that "different attackers are regarded as from the same attack organization if they are motivated by the same motivation and attack the same victim"; then the following rule can be defined to implement the reasoning. Rule: motivate(?m, ?a) ∧ attack(?a, ?v) ∧ motivate(?m, ?b) ∧ attack(?b, ?v) ∧ differentFrom(?a, ?b) → same_attack_organization(?a, ?b). As Figure 4 shows.

(7) Validate and revise.
After defining the concepts, relations and related descriptions, a domain ontology is created. Yet it is initial and imperfect. Minor mistakes such as misplacements and typing errors may occur when a large number of items exist. Illogical or contradictory descriptions may be defined. Some classes, relations or descriptions may be absent or superfluous. Thus, an iterative process of development, validation and revision is necessary for the ontology.

By virtue of the ontology being formally and explicitly encoded, any faults that cause logical inconsistency can be found. The built-in reasoner HermiT is used for this reasoning validation. Further, we create instances as actual data to conduct a deductive validation, as Figure 4 shows. This is an intuitive method to test whether the ontology (e.g. the rules) is effective, and it also provides a way to adjust descriptions and revise the ontology to achieve the purpose determined previously.

Figure 4 Define and apply rules to knowledge reasoning

(8) Result: Ontology.
Finally, a domain ontology of social engineering is developed after iterative revision and validation.
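The same-attack-organization rule from step (6) can be sketched as a small forward-chaining inference over instance triples. This is not the HermiT/SWRL machinery the paper uses, just an illustrative Python equivalent; the attacker, motivation and victim names are invented examples, and differentFrom is approximated by distinct names (a unique-name assumption):

```python
# A minimal sketch of the rule
#   motivate(?m,?a) ^ attack(?a,?v) ^ motivate(?m,?b) ^ attack(?b,?v)
#   ^ differentFrom(?a,?b) -> same_attack_organization(?a,?b)
# applied over hypothetical instance data.
from itertools import permutations

# Facts as (subject, predicate, object) triples; names are invented.
facts = {
    ("financial_gain", "motivate", "attacker_a"),
    ("financial_gain", "motivate", "attacker_b"),
    ("revenge", "motivate", "attacker_c"),
    ("attacker_a", "attack", "victim_1"),
    ("attacker_b", "attack", "victim_1"),
    ("attacker_c", "attack", "victim_1"),
}

def infer_same_organization(facts):
    """Return all inferred same_attack_organization(?a, ?b) triples."""
    motivated_by = {}  # attacker -> set of motivations
    attacks = {}       # attacker -> set of victims
    for s, p, o in facts:
        if p == "motivate":
            motivated_by.setdefault(o, set()).add(s)
        elif p == "attack":
            attacks.setdefault(s, set()).add(o)
    inferred = set()
    for a, b in permutations(motivated_by, 2):  # a != b covers differentFrom
        shared_motive = motivated_by[a] & motivated_by[b]
        shared_victim = attacks.get(a, set()) & attacks.get(b, set())
        if shared_motive and shared_victim:
            inferred.add(("same_attack_organization", a, b))
    return inferred
```

On the facts above, attacker_a and attacker_b share both a motivation and a victim, so they are inferred to be from the same attack organization; attacker_c shares only the victim and is excluded, mirroring the rule body's conjunction.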

3 Material and ontology implementation

The background material regarding the literature and terms has been mentioned in Section 2 and will not be repeated here. This section presents the key material and procedures for the ontology implementation, i.e. defining the concepts, relations and other related descriptions.

3.1 Define core concepts in the domain ontology

This subsection details 11 core concepts corresponding to entities that significantly constitute or affect the social engineering domain. For each concept, the concept definition, synonym terms, taxonomy and some other properties are described. Figure 5 shows these entities (concepts). The circular arrow represents an approximate attack cycle for typical attack scenarios: 1) the attacker, motivated by certain factors, 2) gathers specific information, formulates an attack strategy and crafts an attack method; 3) then, through a certain medium, the attack method is performed and the attack target is interacted with, 4) to exploit their vulnerabilities, which take effect and lead to attack consequences; 5) the consequences feed back to the predetermined attack goal to satisfy the attack motivation.

Figure 5 Core entities (concepts) in social engineering domain (Attacker, Attack Motivation, Social Engineering Information, Attack Goal, Attack Strategy, Attack Method, Attack Medium (Social Interaction), Human Vulnerability, Attack Target / Victim, Effect Mechanism, Attack Consequence)

3.1.1 Attacker
For social engineering, the attacker (a.k.a. social engineer) is the party that conducts a social engineering attack; it is typically motivated by certain factors discussed in Section 3.1.2. Social engineering attackers appear in various forms in reality, such as hackers, phreakers, phishers, disgruntled employees, identity thieves, penetration testers, script kiddies and malicious users. Different criteria can also be used for the attacker's taxonomy. The attacker identified as an individual person is familiar to the public, yet it does not have to be an individual. The attacker can also be a group or an organization. The attacker can be a real person or a virtual human role (e.g. a bot), and it can be internal or external, as Figure 6 shows.

Figure 6 Taxonomy of attacker (social engineer) (taxonomy 1: individual, group, organization; taxonomy 2: internal, external; taxonomy 3: real person, virtual human role)

3.1.2 Attack Motivation
Attack motivation is the factors that motivate (incent, drive, cause or prompt) the attacker to conduct a social engineering attack. It can be intrinsic or extrinsic. Considering that this simple taxonomy does not seem to be significantly helpful to social engineering analysis, a common list of attack motivations in social engineering may be more intuitive. It includes but is not limited to: 1) financial gain [4], 2) competitive advantage [5], 3) revenge [4], 4) external pressure, 5) personal interest, 6) intellectual challenge, 7) increasing followers or friends in SNSs, 8) image spoiling (denigration, reputation destruction, stigmatization), 9) prank, 10) fun or pleasure, 11) politics, 12) war, 13) religious belief, 14) fanaticism, 15) social disorder, 16) cultural disruption [6], 17) terrorism, 18) espionage, 19) security test.

3.1.3 Attack Goal and Object
The attack goal (a.k.a. attack purpose) is something that the attacker wants to achieve by specific attack methods so that the attack motivation can be satisfied. For social engineering, it is some kind of breach of cyberspace security. In general, to breach cyberspace security is to breach the security goals (confidentiality, integrity, availability, controllability, auditability, etc.) of the four basic elements of cyberspace (i.e. the attack object) [1]. These four basic elements are Carrier (the infrastructure, hardware and software facilities of cyberspace), Resources (the objects and data content that flow through cyberspace), Subjects (the main body roles and users, including human users, organizations, equipment, software, websites, etc.), and Operations (all kinds of activities for processing Resources, including creation, storage, change, use, transmission, display, etc.) [7, 8]. For complex attack scenarios, there may exist sub-goals (preconditions), which themselves may not breach cybersecurity.

Social engineering attack goals include but are not limited to: 1) network intrusion, interception or disruption, 2) gaining unauthorized access to information or systems, 3) denial of service, 4) data exfiltration, modification, fabrication or destruction, 5) infrastructure

sabotage, 6) obtaining physical access to restricted areas. Thus, attack goals can simply be classified into the above categories, or other taxonomies can be used, as Figure 7 shows.

Figure 7 Taxonomy of social engineering attack goal (an intuitive taxonomy following the list above; to breach security goals: confidentiality, integrity, availability, controllability, auditability; to breach cyberspace elements (object): carrier, resources, subjects, operations)

3.1.4 Social Engineering Information
In many attack scenarios, the success of social engineering relies heavily on the information gathered, such as personal information of the targets (victims), organization information, network information and social relation information. In a broad sense, every bit of information posted publicly or leaked in cyberspace or in reality might provide attackers resources, such as to learn the environment, discover targets, find vulnerable human factors and cyber vulnerabilities, formulate attack strategies, and craft attack methods. This is also a feature of social engineering compared with classical computer attacks. Thus, this paper uses "social engineering information" to represent any information that helps the attacker to conduct a social engineering attack.

Social engineering information includes but is not limited to: 1) person name, 2) identity, 3) photograph, 4) habits and characteristics, 5) hobbies or interests, 6) job title, 7) job responsibility, 8) schedule, 9) routines, 10) new employee, 11) organizational structure, 12) organizational policy, 13) organizational logo, 14) company partner, 15) lingo, 16) manuals, 17) interpersonal relations, 18) family information, 19) profile in SNSs, 20) posts in social media, 21) connections in SNSs, 22) SNSs group information, 23) (internal) phone numbers, 24) email information (address, format, footer, etc.), 25) username, 26) password, 27) network information, 28) computer name, 29) IP addresses, 30) server name, 31) application information, 32) version information, 33) hardware information, 34) IT infrastructure information, 35) building structure, 36) location information.

Figure 8 presents a taxonomy based on what space the information describes, in which the last level may be more intuitive. Other taxonomies can also be workable, such as publicly accessible information vs. restricted information; or personal information, social relations information, and information about various environments (cyber, cultural, physical).

Figure 8 Taxonomy of social engineering information (information describing social space: individual information such as identification, psychological characters, personality traits, behavior and habits; interpersonal relations and organization information such as job information and communication information; information describing cyber space: account, network, computer, application and service, software, hardware and infrastructure information; information describing physical space: building structure, location information)

3.1.5 Attack Strategy
Attack strategy is a plan, pattern, or guidance of actions formulated by the attacker for a certain attack goal. It is necessary especially for complex social engineering attacks. Usually, social engineering attackers formulate the attack strategy based on their comprehensive understanding of the attack situation, such as resources, environments, targets, vulnerabilities and mediums. There are two common social engineering strategies in the literature: the forward (usual) strategy and the reverse strategy. In the forward attack strategy, the attacker directly contacts the targets and delivers attack payloads to them, waiting for the targets to trigger the attack and be compromised. However, in reverse social engineering, the targets are prompted to contact the attacker actively for a request or help, and the attacker usually pretends in advance to be a legitimate, authoritative, expert or trustworthy party. As a result,

a higher degree of trust is established and the targets are more likely to be attacked. E.g., the attacker first causes a network failure and then pretends to be technical support staff; when the targets seek help, the attacker convinces them with certain excuses to reveal the password or install malicious software.

From the duration perspective, an attack strategy can be a persistent strategy or a short-term strategy. Some other categories are also helpful to label attack strategies, as Figure 9 shows.

Figure 9 Taxonomy of social engineering attack strategy (common taxonomy: forward / usual strategy, reverse strategy; duration taxonomy: persistent strategy, short-term strategy; other strategy categories or labels: targeted strategy, multiple targets strategy, progressive strategy)
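The strategy categories in Figure 9 can be treated as largely orthogonal labels on an attack scenario. The following Python sketch (our own illustration; the facet names and helper are hypothetical, not part of the ontology) tags a scenario with one label per facet:

```python
# Hypothetical sketch: treat the Figure 9 strategy categories as facets
# and tag an attack scenario with labels drawn from them.

STRATEGY_FACETS = {
    "common":   {"forward (usual) strategy", "reverse strategy"},
    "duration": {"persistent strategy", "short-term strategy"},
    "other":    {"targeted strategy", "multiple targets strategy",
                 "progressive strategy"},
}

def label_strategy(labels):
    """Group free-form strategy labels by facet; unknown labels are rejected."""
    tagged = {facet: set() for facet in STRATEGY_FACETS}
    for label in labels:
        for facet, values in STRATEGY_FACETS.items():
            if label in values:
                tagged[facet].add(label)
                break
        else:
            raise ValueError(f"unknown strategy label: {label}")
    return tagged

# The reverse social engineering example from the text: the attacker breaks
# the network, then waits persistently for the chosen victim to seek help.
example = label_strategy({"reverse strategy", "persistent strategy",
                          "targeted strategy"})
```

Such labels could later serve as instance annotations when attack incidents are loaded into the knowledge graph.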

Figure 10 Taxonomy of social engineering attack method (categories include human-based methods, computer-based methods, combined methods such as smishing+DoS, spear phishing+CSRF, phishing+XSS, phishing+drive-by download and APT, and auxiliary tricks; instances include influence, deception, persuasion, manipulation, inducing, impersonation, masquerading, name-dropping, shoulder surfing, authoritative voice, piggybacking, using awards, trailing, quid pro quo, (phone) pretexting, intimidation, vishing, equivocation, (email) phishing, creating confusion, website phishing, flirtation, smishing, confidence trick, spear phishing, trust building, whaling, rapport building, WiFi phishing, compliments and flattery, trojan attack, creating urgent context, baiting, using fake name card, watering hole)

3.1.6 Attack Method
When an attack strategy exists, the attack method generally accords with or is guided by it. The attack method is the way, manner or means of carrying an attack out; the attacker crafts and performs it to achieve a specific attack goal. Synonyms such as attack vector, attack technique and attack approach are used to convey the same meaning. A common taxonomy in the literature is to divide social engineering attacks into human-based and computer-based (or technology-based) [9–13]. Figure 10 (right) presents 20 attack method instances, among which some methods such as influence, deception, persuasion, manipulation and induction also describe skills frequently used in other methods. In many attack scenarios, multiple social engineering methods can be jointly used; classical attack methods that exploit non-human vulnerabilities might also be combined to perform social engineering attacks. Besides, many auxiliary tricks or cunning actions may be utilized in different methods to assist the attack (e.g. to obtain trust, or to influence or deceive the targets). Figure 10 shows the overview of these categories and the corresponding instances. It is a non-exhaustive list, and it seems impossible to enumerate all social engineering attack methods, since new attack methods keep emerging with the development of cyber technology, the evolution of the environment and attackers' creativity.

3.1.7 Attack Target, Victim
The attack target is the party that suffers a social engineering attack and brings about an attack consequence. The attacker applies the attack method to the targets, and they become victims once their vulnerabilities are exploited. For attackers, anyone helpful to achieve the attack goal is a potential attack target. And the attacker might select multiple targets in some attack scenarios. The potential attack targets include but are not limited to: 1) new employees, 2) secretaries, 3) help desk, 4) technical support, 5) system administrators, 6) telephone operators, 7) security guards, 8) receptionists, 9) contractors, 10) clients, 11) partners, 12) managers, 13) executive assistants, 14) manufacturers, 15) vendors [14]. Similar to the attacker, an attack target can be an individual, a group or an organization; a real person or a virtual human role; internal or external, as Figure 11 shows.

Figure 11 Taxonomy of social engineering attack target (taxonomy 1: individual, group, organization; taxonomy 2: internal, external; taxonomy 3: real person, virtual human role; taxonomy 4: common target, specially selected target)

3.1.8 Social Interaction and Attack Medium
Social engineering is a type of attack that involves social interaction, which is defined as the communication between, or joint activity involving, two or more human roles [1]. It covers interpersonal interaction in the real world and user interaction in cyberspace. Attack medium is not only the entity through which the social interaction is implemented (through which the target is contacted), but also the substance or channel through which attack methods are carried out. In some social engineering attacks, several different mediums might be used. E.g., the attacker deceives the target through the phone into receiving an important document, and then carries out the phishing attack in the email.

The attack mediums include but are not limited to: 1) the real world, 2) attached files, 3) letter, 4) manual, 5) card, 6) picture, 7) video, 8) RFID tag, 9) QR code, 10) phone, 11) email, 12) website, 13) software, 14) Bluetooth, 15) pop-up window, 16) instant messenger, 17) cloud service, 18) Voice over IP (VoIP), 19) portable storage drives, 20) short message service (SMS), 21) mobile communication devices, 22) SNSs.

The taxonomies of social interaction can be various according to different criteria. It can be direct (e.g. face to face in the real world) or indirect (e.g. email), real-time (e.g. phone talking) or non-real-time (e.g. email), active or passive (e.g. reverse social engineering), as Figure 12 shows.

Figure 12 Taxonomy of social interaction in social engineering (direct / indirect; real-time / non-real-time; active / passive)

3.1.9 Human Vulnerability
Human vulnerability is the human factor exploited by the attacker to conduct a social engineering attack through various kinds of attack methods. This is a distinctive attribute of social engineering compared to classical computer attacks. For social engineering, other types of vulnerability (e.g. software vulnerabilities) can be exploited together with human vulnerability, yet they are non-necessary [1]. A wide range of human factors can be exploited in social engineering, and a skilled social engineer (attacker) can transform common or inconspicuous human factors into security vulnerabilities exploitable in specific attack scenarios.

In general, human vulnerabilities in social engineering fall into four aspects: 1) cognition and knowledge, 2) behavior and habit, 3) emotion and feeling, and 4) psychological vulnerabilities. The psychological vulnerabilities can be further divided into three levels, 1) human nature, 2) personality trait and 3) individual character, from the evolution perspective of human wholeness to individuation [15]. Following is a non-exhaustive list of human vulnerabilities, which contains 43 instances of these six categories.
• Cognition and Knowledge (8 instances): ignorance, inexperience, thinking set and stereotyping, prejudice / bias, conformity, intuitive judgement, low level of need for cognition, heuristics and mental shortcuts.
• Behavior and Habit (4 instances): laziness / sloth, carelessness and thoughtlessness, fixed-action patterns, behavioral habits / habitual behaviors.
• Emotions and Feelings (11 instances): fear / dread, curiosity, anger / wrath, excitement, tension, happiness, sadness, disgust, surprise, guilt, impulsion, fluke mind.
• Human Nature (6 instances): self-love, sympathy, helpfulness, greed, gluttony, lust.
• Personality Traits (5 dimensions): conscientiousness, extraversion, agreeableness, openness, neuroticism.
• Individual Characters (9 instances): credulity / gullibility, friendliness, kindness and charity, courtesy, humility, diffidence, apathy / indifference, hubris, envy.

3.1.10 Effect Mechanism
Social engineering effect mechanism describes the structural relation of what, why or how a specific attack effect (consequence) corresponds to a specific human vulnerability, in a specific attack situation [15]. Given the attack scenarios and human vulnerabilities, it explains or predicts the attack consequence. E.g., social exchange theory and reciprocity norm explain why new employees (inexperience, helpfulness, etc.) are more vulnerable to giving up their username and password to technical support staff pretended by the attacker, who helps to resolve their network failure first and then requests an information disclosure with certain excuses. Social engineering effect mechanisms involve lots of principles and theories in multiple disciplines such as sociology, psychology, social psychology, cognitive science, neuroscience and psycholinguistics. Study [15] summarizes six aspects of social engineering effect mechanisms: 1) persuasion, 2) influence, 3) cognition, attitude and behavior, 4) trust and deception, 5) language, thought and decision, 6) emotion and decision-making. Following is a non-exhaustive list of effect mechanisms, which contains 38 instances of these six aspects.

• Persuasion (7 instances): similarity & liking & helping in persuasion, distraction in persuasion and manipulation, source credibility and obey to authority, the central route to persuasion, the peripheral route to persuasion, Elaboration Likelihood Model (ELM) of persuasion, recipient's need for cognition in persuasion.
• Influence (8 instances): group influence and conformity, normative influence (social validation), informational influence (social proof), social exchange theory, reciprocity norm, social responsibility norm, moral duty, self-disclosure and rapport relation building.
• Cognition, Attitude and Behavior (9 instances): impression management theory, cognitive dissonance, commitment and consistency, foot-in-the-door effect, diffusion of responsibility, bystander effect, deindividuation in group, time pressure and thought overloading, scarcity: perceived value and fear arousing.
• Trust and Deception (5 instances): trust and taking risk, factor affecting trust, factor affecting deception, integrative model of organizational trust, interpersonal deception theory (IDT).
• Language, Thought and Decision (4 instances): relation between language and thinking, framing effect and cognitive bias, language invoking confusion: induction and manipulation, indirectness of thought and negative conception expression in language.
• Emotion and Decision-making (5 instances): neurophysiological mechanism of emotion & decision, emotions and feelings influence decision-making, facial expression & deception leakage, facial action coding, micro-expression identification and deception detecting.

Figure 13 Taxonomy of social engineering attack consequence (intuitive taxonomy: network intrusion, interception or disruption, unauthorized access to information or systems, denial of service, data exfiltration / modification / fabrication / destruction, infrastructure sabotage, giving up physical access to restricted areas; breached security goals: confidentiality, integrity, availability, controllability, auditability; breached cyberspace elements: carrier, resources (object), subjects, operations)

3.1.11 Attack Consequence
Attack consequence is something that follows as a result or effect of a social engineering attack. The attacker feeds it back to the attack goal to decide whether a further attack is required. The taxonomy of attack consequence is similar to the taxonomy of attack goal, as Figure 13 shows.

Since the subclass names in Protégé will be converted to node labels in the later knowledge graph, and considering the intuitive demonstration and data features, multiple different taxonomies can be used to assist knowledge analysis. Figure 14 (left) shows the implementation of the concepts defined above. Table 2 shows the related concept descriptions set as class axioms in Protégé yet not reflected in Figure 14. [3]

[3] The implementation file was submitted as supplementary material (SEiCS-Ontology+instances.owl) for review.

Figure 14 Overview of concepts and relations defined in Protégé

Table 2 Other descriptions of concepts (class axioms)
No.  Concept  Description  Concept
1  Attacker  EquivalentTo  SocialEngineer
2  AttackTarget  EquivalentTo  Victim
3  AttackGoal  EquivalentTo  AttackPurpose
4  AttackMedium  EquivalentTo  (Entity of) SocialInteraction

3.2 Define relations in the domain ontology
Based on the definitions presented in Section 3.1, we extract 22 kinds of relations among the core concepts. Table 3 shows these relations and their Domain (start), direction and Range (end). Figure 14 (right) shows the implementation of these relations in Protégé, and Table 4 shows the related concept descriptions set as object property (relation) axioms yet not reflected in Figure 14 and Table 3.

Table 3 Define relations among the core concepts
No.  Concept (Domain)  Relation (→)  Concept (Range)
1  AttackMotivation  motivate →  Attacker
2  Attacker  motivated by →  AttackMotivation
3  Attacker  gather and use →  SocialEngineeringInformation
4  Attacker  craft and perform →  AttackMethod
5  Attacker  formulate →  AttackStrategy
6  AttackMethod  to achieve →  AttackGoal
7  AttackMethod  guided by →  AttackStrategy
8  AttackMethod  apply to →  AttackTarget
9  AttackMethod  performed through →  AttackMedium
10  AttackMethod  to exploit →  HumanVulnerability
11  AttackStrategy  based on →  SocialEngineeringInformation
12  AttackTarget  suffer →  AttackMethod
13  AttackTarget  have vul →  HumanVulnerability
14  AttackTarget  interacted through →  AttackMedium
15  AttackTarget  bring about →  AttackConsequence
16  HumanVulnerability  take effected by →  EffectMechanism
17  EffectMechanism  explain →  AttackConsequence
18  AttackConsequence  feed back to →  AttackGoal
19  AttackGoal  to satisfy →  AttackMotivation
20  Sub-goal  subgoal of →  Goal
21  AttackMethod  with skill →  CommonSkill
22  AttackMethod  with trick →  AuxiliaryTrick

Table 4 Other descriptions of relations (object property axioms)
No.  Relation  Description  Relation
1  motivate  Inverse Of  motivated by
2  incent  SubProperty Of  motivate
3  drive  SubProperty Of  motivate
4  incented by  SubProperty Of  motivated by
5  driven by  SubProperty Of  motivated by
6  incent  Inverse Of  incented by
7  drive  Inverse Of  driven by
8  apply to  Inverse Of  suffer
(optional verbose relations)
9  conduct  Equivalent To  craft and perform
10  exploited by  Inverse Of  to exploit

3.3 Define other descriptions in the ontology
Besides the axiom descriptions for concepts and relations in Table 2 and Table 4, annotations are optional to facilitate the ontology implementation, and many comments (a type of annotation) for instances are added in Section 5.1 to help the instance edition and knowledge analysis. Here, three reasoning rules are defined for simple scenario analysis such as unique attacker, victim and attack consequence (Figure 15). Rule 1 is used to add a new relation: if 1) an attacker crafts and performs a certain attack method, and 2) the attack method is applied to a target, then a relation "attack" will be created from the attacker to the target (victim). Rules 2 and 3 are used to automatically complete the relations that are not designated explicitly in the instance data but are defined in the ontology. This improves, and is convenient for, the instances' creation. The built-in reasoner HermiT can be used to implement the reasoning. For complex attack analysis, these rules might need some adjustments, and other reasoning tools can also be used.

Figure 15 Rules defined in the ontology

Above is the key material and ontology implementation after the ontology revision and validation. The supplementary material will enable reviewers / independent researchers to reproduce the results.

4 Result: domain ontology of social engineering in cybersecurity
Figure 16 shows the domain ontology of social engineering in cybersecurity developed in Protégé. The core concepts and their relations are marked inside the red polygon, the outside shows the taxonomies (also used as labels), and the right area is the legend for relations (the directed color connections in the figure). To be intuitive and integrative, Figure 17 presents the ontology in a clearer and more concise way.

Overall, 11 core concepts and 22 kinds of relations among them are formally and explicitly encoded / defined in Protégé, together with the related descriptions, rules and annotations. This domain ontology can be exported in multiple ontology description languages and file formats, such as RDF / XML, OWL /

Figure 16 The domain ontology of social engineering in cybersecurity developed in Prot´eg´e

Figure 17 The domain ontology of social engineering in cybersecurity: the 11 core concepts (Attack Motivation; Attacker / Social Engineer; Social Engineering Information; Attack Strategy; Attack Method; Attack Medium / Social Interaction; Attack Target / Victim; Human Vulnerability; Effect Mechanism; Attack Consequence; Attack Goal) connected by the 22 relations of Table 3

XML, Turtle and JSON-LD, to reuse and share the domain knowledge schema.

5 Evaluation: knowledge graph application examples
The best way to evaluate the quality of the developed ontology may be problem-solving methods or using it in applications which reflect the design goal [2]. Corresponding to the purpose of the ontology development presented in Section 2, this section evaluates the domain ontology by its knowledge graph application for analyzing social engineering attack scenarios and incidents. First, the ontology, serving as a machine-processable knowledge schema, is used to create the instances, generate the knowledge base and build a knowledge graph. Then, 7 knowledge graph application examples are presented for social engineering attack analysis.
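To make the instance-creation step concrete, the sketch below shows how one attack scenario's entities could be written down as subject-relation-object triples that conform to the relation schema of Table 3 (relation names rendered in snake_case). This is an illustrative assumption about the data shape, not the paper's actual Protégé encoding, and all instance names are hypothetical.

```python
# Hypothetical instance triples for a pretexting scenario, using only
# relations defined by the ontology (Table 3), written in snake_case.
scenario = [
    ("attacker1",   "motivated_by",      "free_telephone_service"),
    ("attacker1",   "craft_and_perform", "pretexting1"),
    ("pretexting1", "performed_through", "phone"),
    ("pretexting1", "apply_to",          "operator1"),
    ("pretexting1", "to_exploit",        "sympathy"),
    ("operator1",   "have_vul",          "sympathy"),
    ("sympathy",    "take_effected_by",  "social_responsibility_norm"),
    ("operator1",   "bring_about",       "information_disclosure"),
]

# A simple consistency check: every relation must belong to the schema.
SCHEMA = {"motivated_by", "craft_and_perform", "performed_through",
          "apply_to", "to_exploit", "have_vul", "take_effected_by",
          "bring_about"}
assert all(rel in SCHEMA for _, rel, _ in scenario)
```

In the paper itself, such instances are edited in Protégé, where consistency against the ontology can be checked, and are then exported as RDF/XML.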

Table 5 Material of social engineering attack scenarios / incidents adopted from [1, 15] and used to generate knowledge base

(Each entry gives the scenario description, the exploited human vulnerabilities, and the corresponding effect mechanisms.)

1. Pretexting. The attacker attempts to elicit classified or sensitive information from victims (e.g. telephone company operators; motivated by using telephone service without payment) by pretexting via telephone. (1) The attacker makes a prior survey to know better the lingo, organization and victims, and pretexts to be an inner staff (e.g. one who is in trouble) or technical support to elicit information. (2) The attacker requests classified information by pretending to be a cable splicer and pretexting that he is wiring two hundred pair terminals for police. Who would want to refuse a little help to a company man coping with that heavy-duty assignment? She feels sorry for him, she's had bad days on the job herself, and she'll bend the rules a little to help out a fellow employee with a problem.
Human vulnerabilities: credulity or gullibility, sadness, sympathy, the desire to be helpful, agreeableness, kindness and charity, inexperience.
Effect mechanisms: social responsibility norm and moral duty, (similarity liking and helping), emotions and feelings influence decision-making, ELM, IDT, factors affecting trust.

2. Shoulder surfing. The internal attacker (for security test) pretends to be a maintenance worker to (get access to the target workplace and) contact the victims. When the victim is not paying attention, the attacker collects information such as username and password by surfing over the victim's shoulder, snooping prominent places such as sticky notes, papers or computers.
Human vulnerabilities: carelessness and thoughtlessness, credulity, gullibility, friendliness, ignorance.
Effect mechanisms: distraction in persuasion and manipulation, IDT, factor affecting deception and trust, peripheral route to persuasion.

3. Vishing and Pretexting. The attacker (e.g. motivated by financial gain) pretends to be a new employee and convinces the targets that he will suffer greatly if the request is not granted. E.g., he requests the technical support (e.g. Paul) to reset the password of a certain account to deal with an urgent task, and further asks for a VPN to access from outside.
Human vulnerabilities: guilt, sympathy, the desire to be helpful, friendliness, credulity.
Effect mechanisms: foot-in-the-door, impression management theory, two routes to persuasion, IDT, cognitive dissonance, ELM, emotions and feelings influence decision-making.

4. Vishing and Pretexting. The attacker (e.g. motivated by financial gain, intellectual challenge) calls a staff of the technical support department to say that the CEO authorized his requesting an urgent VPN channel for a project presentation in another city, and further tells him / her that other staffs did this before, such as Paul.
Human vulnerabilities: fear and dread, conformity, neuroticism, the desire to be helpful, credulity.
Effect mechanisms: source credibility and obey to authority, diffusion of responsibility, bystander effect, deindividuation in group.

5. Manipulating conversation. The attackers (e.g. motivated by fun or pleasure) steer the group conversation to a security topic; one of the attackers discloses his password to discuss whether it is strong enough. If most of the other participants (or attackers) also start disclosing passwords, the targets are likely to be manipulated into disclosing passwords or other sensitive information.
Human vulnerabilities: conformity, agreeableness, extraversion, credulity, courtesy and humility, diffidence.
Effect mechanisms: group influence and conformity, social validation, IDT, reciprocity norm, self-disclosure and rapport relation building, social exchange theory, cognitive dissonance.

6. Piggybacking. An authorized person provides access to an unauthorized person (e.g. motivated by espionage) by keeping the secured door open for providing help or other reasons. Most employees do not know every colleague at a (large) organization and will hold a door open for politeness, let alone when the attacker is nicely dressed, shoes shined, hair perfect, with polite manner and a smile; victims will be less likely to suspect.
Human vulnerabilities: courtesy, humility, credulity, openness to experience, the desire to be helpful, friendliness, intuitive judgement.
Effect mechanisms: peripheral route to persuasion, (similarity liking & helping), distraction in persuasion and manipulation, IDT, factors affecting trust, facial expression and deception leakage.

7. Trailing and Impersonating. The attacker (e.g. for security test, personal interest) pretends to be an employee of the target organization through suitable disguises such as uniform and printed badge, gaining access to an establishment by following employees who have security cards (under the cover of lunch rush at a large corporation). The security guard and employee look him in the eye, but they have become accustomed to it. In some organizations, the lazy security guards put the access card on the desk for those who forget to bring their access card to pick it up for themselves.
Human vulnerabilities: helpfulness, think set and stereotyping, heuristic thinking and mental shortcuts, intuitive judgement, apathy, indifference, ignorance, laziness and sloth.
Effect mechanisms: ELM, peripheral route to persuasion, distraction in persuasion and manipulation, level of need for cognition.

8. Baiting. The attacker (e.g. motivated by competitive advantage) leaves a USB stick containing malicious code in a location where it is likely to be found by the victims. The outside of the USB stick bears the logo of the target organization or attractive icons to lure the victims to pick it up and insert it into a computer. Once inserted, the malicious code may execute automatically.
Human vulnerabilities: curiosity, excitement, greed, conscientiousness, sympathy or the desire to be helpful, inexperience.
Effect mechanisms: (similarity liking and helping), ELM, two routes to persuasion, IDT, emotions and feelings influence decision-making.

9. Reverse SE. The attacker (e.g. motivated by espionage) sends an email using a faked address (technical support department) to a new employee informing him / her that "a network test will be conducted recently, and if there is a network failure, please contact xxx". The attacker makes a network fault and waits for the new employee's request. After helping to resolve the problem, the attacker says sincerely "Would you like to do us a favor, just one minute, completing a survey used for developing a security awareness training program for new employees; nearly 80% of the employees have already done this." "Ok, my pleasure." "Are you aware of our email policies? ... It can be dangerous to open unsolicited attachments ... We need to know your password to evaluate the security awareness of new employees. It is a secure matter." "Okay, it is ..."
Human vulnerabilities: inexperience, intuitive judgement, agreeableness, ignorance, credulity, conformity, the desire to be helpful.
Effect mechanisms: reciprocity norm, impression management theory, commitment and consistency, framing effect and cognitive bias, language invoking confusion - induction and manipulation, group influence and conformity, diffusion of responsibility, factors affecting trust and deception, IDT.

10. Phishing. The attacker (e.g. motivated by financial gain) sends phishing emails with faked addresses to inform targets that there are very low discount coupons for food (or sport event tickets) for a limited time. The email contains tempting food pictures (or passionate sports posters). This lures the targets to click on malicious links (with encoded URL address: att.eg.net), divulge private information, etc.
Human vulnerabilities: excitement, happiness, greed, gluttony, surprise, extraversion, impulsion, fear, intuitive judgement.
Effect mechanisms: IDT, peripheral route to persuasion, distraction in persuasion and manipulation, emotions and feelings influence decision-making, scarcity: perceived value and fear arousing.

11. Spear Phishing. The (fired) attacker (e.g. motivated by revenge, financial gain, prank) finds there is some resentment between employees of the target organization through text, images or videos in SNSs, and sends SNS messages or emails embedded with malicious code to selected targets, claiming it is a hoax virus that could be forwarded anonymously to someone they didn't like. This may compromise a large group of individuals in the organization.
Human vulnerabilities: disgust, prejudice, anger or wrath, hubris, envy.
Effect mechanisms: deindividuation, emotions and feelings influence decision-making, neurophysiological mechanism of emotion & decision, micro-expression identifying.

12. Smishing. The attacker (e.g. motivated by financial gain, competitive advantage) blocks the target CEO's cell phone signal and sends an SMS message to his secretary by faking the CEO's phone number: "I'm in a meeting at another city and couldn't talk on the phone. Encrypt the organization structure table and a contract file to a zip with key *** and send it to [email protected] immediately! Otherwise, we will lose an important business."
Human vulnerabilities: fear and dread, tension, neuroticism, self-love, credulity.
Effect mechanisms: source credibility and obey to authority, time pressure and thought overloading, emotions and feelings influence decision-making (fear-arousing in persuasion), IDT.

13. Trojan attack, honey trap. The attacker (e.g. motivated by financial gain) puts software on a website and implies it is free for downloading and watching porn images or videos, with text marked "you won't see the seductive images if you don't act." Once the targets open the link or install the software, the target's computer or mobile device is compromised.
Human vulnerabilities: lust, greed, excitement, curiosity, impulsion, intuitive judgement.
Effect mechanisms: IDT, emotions and feelings influence decision-making, peripheral route to persuasion, distraction in persuasion and manipulation, indirectness of thinking and negative expression in language.

14. Water-holing. The attacker (e.g. motivated by financial gain) finds that the targets usually, regularly, will or are likely to visit certain websites, and then infects these websites with malicious code waiting for the targets' trigger. The targets will be compromised e.g. when they visit the websites, download software (malware) or click (malicious) links.
Human vulnerabilities: fixed-action patterns, behavioral habits of site-visiting, think set and stereotyping.
Effect mechanisms: IDT, factors affecting trust and deception, social and organizational trust theory.

15. Whaling attack. A spear phishing attack directed specifically at high-value targets such as senior executives, CEO or CFO. The attacker (e.g. motivated by financial gain) crafts the whaling baits, such as emails and websites, to be highly customized and personalized, incorporating the target's name, job title, job responsibility, internal phone numbers, organizational logos, email footer and other relevant information. And the attack is usually context-aware, e.g. "... the xxx business meeting / conference in your schedule needs you to register and confirm the registration using the attached software ( or back door with encoded domain address: att.eg.net)".
Human vulnerabilities: heuristics and mental shortcuts, intuitive judgement, carelessness, thinking set or stereotyping, credulity.
Effect mechanisms: ELM, the central route to persuasion, the peripheral route to persuasion, time pressure and thought overloading, factors affecting deception and trust, integrative model of organizational trust.

Figure 18 The overview of the knowledge base generated in Prot´eg´e

5.1 Create instances, knowledge base and knowledge graph
An ontology together with a set of instances organized by the knowledge schema defined by the ontology constitutes a knowledge base, which further serves as the data source of a knowledge graph. For this paper, a dataset of social engineering attack scenarios that contains the necessary instance classes, such as attacker, victim / target, human vulnerability, social interaction (medium) and attack goal, is in demand. Yet no such public dataset is available now. Thus, the attack incidents and typical attack scenarios described in works [1] and [15] are adopted and expanded as material to create instances for each concept defined in the ontology and build the knowledge base. Overall, 15 attack scenarios (Table 5) in 14 social engineering attack types are used to generate a relatively medium-small size knowledge base.

The instances and their interrelations described in every attack scenario are dissected and edited in Protégé as well, since it is convenient to check the data consistency and revise errors according to the ontology. In this process, we add many comments (for instances of attacker, attack method and victim) to assist the instance creation and knowledge analysis. Figure 18 shows the overview of the knowledge base in Protégé. A total of 224 instances are created in the knowledge base. [4]

[4] The implementation file was submitted as supplementary material (SEiCS-Ontology+instances-inferred.owl) for review.

Due to the limited functionality of Protégé for data analysis and visualization, we select Neo4j (community-3.5.19) [16] as the tool to display the knowledge graph and analyze social engineering attacks. Neo4j is easier and faster to represent, retrieve and navigate connected data, and the Neo4j CQL (Cypher query language) commands are declarative pattern-matching, in human-readable format and easy to learn.

There are mainly two steps to migrate data from Protégé to Neo4j. First, export the ontology and instances in Protégé to an RDF/XML or OWL/XML file, with the reasoner enabled to infer and complete the knowledge according to the axioms and rules defined. Then, import the RDF/XML file into Neo4j by the plugin neosemantics (version 3.5.0.4). The detailed scripts and commands used to build the knowledge graph are submitted as supplementary material.

According to the statistics in Neo4j, 1785 triples were imported and parsed, and 344 resource nodes and 939 relations were created in the whole knowledge graph. Figure 19 shows the knowledge graph consisting of all instance nodes and their interrelations; the legend for node color is at the bottom left. In the knowledge graph, the relations craft and perform, apply to, to exploit, have vul and bring about among the nodes attacker, attack method, victim, human vulnerability and attack consequence are colored red, to abstract and denote an attack occurrence (Figure 19), for the convenience of attack analysis.
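The reasoner pass mentioned above materializes rule-derived relations before import; for instance, Rule 1 of Section 3.3 adds an "attack" relation whenever an attacker crafts and performs a method that is applied to a target. Below is a minimal, illustrative Python analogue of that inference over triples; the paper's actual inference is done by the HermiT reasoner in Protégé, and the instance names here are hypothetical.

```python
def apply_rule_1(triples):
    """If (a, craft_and_perform, m) and (m, apply_to, t) both hold,
    infer (a, attack, t), mirroring Rule 1 of the ontology."""
    inferred = set(triples)
    for a, rel1, m in triples:
        if rel1 != "craft_and_perform":
            continue
        for m2, rel2, t in triples:
            if m2 == m and rel2 == "apply_to":
                inferred.add((a, "attack", t))
    return inferred

kb = {("attacker1", "craft_and_perform", "phishing1"),
      ("phishing1", "apply_to", "victim1")}
kb = apply_rule_1(kb)
assert ("attacker1", "attack", "victim1") in kb  # the materialized relation
```

After such inference, the derived "attack" edges are part of the exported RDF and thus appear directly in the Neo4j graph.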

Figure 19 The knowledge graph of all instance nodes and their interrelations (node-color legend at bottom left; red edges craft_and_perform, apply_to, to_exploit, have_vul and bring_about abstract an attack occurrence)
w e _ pe o u a o provide… t e t a y ve rfho t e f t o r m pa l t m o e c n r a c _ ed _ c _ _ f th r t d f thr m r x o k m o t _ _ d u r p k d n e t c _ l p gh l t f e a e h r e t l a _ a p f y c b n … r t o a o e rm b f y t h c i e _ k _ y h o t h n c f e t a e y a a r x e t f t e i ee e _ b b _ o r a b n pb a _ p t t n y … e r t i e SMS_me… r a f n rf t f t d n e _ _ c o p _ _ _ f e e rmf espiona… in o k f k e t f r e e y _ f _ t f ed b k _ _ _ e e e a t d n ee e c _ attacker… ve h e r p b u _ ta c n r kou e y f fx o e e y gh k s h n e ephishing c c k c c manipul… d m u ug f f n f o i envy a g h t e y f p a fy p hr t f e e t f t is c o t b _t t g b t t d a f y a l c e l _ b _sa n t _ g cte c h c _ c _ u e h t to a iva o o ra c f o e k t t a o b b y _ h _ int p a t _ t a a _ b re t t t i b t a g t t d _ x a _ k b _n e _ _ l e _ l t h o t _ cve l _ r e _ t b y w n t u aki e o y f n t t it i t c s t a manipul… e y h _ n a _ i o n b l_ask eo c e _ h_ _ a t f d f y e p ill f vu o it lo x e u x c e a f ill _ t e f y w p kc a ith_sk k y e e w e k n f to_pr ene… w t e t x f l ll x p it _ e f i t c bringe_about victim12… attacker1 … o e c h f c hp ha euristice… b h sk i _ a _ e i c _ f s e f _f _ h o l a t _ t h t n e deindkivi…… k t n o t r r b l t i provide… m a o i p l illec f _ _ l t i t f n i n n w reverse_… f t f k c e a e c e o i a t t e k a t t a n_ e n t p e t c i _ ke i e i e t d h to t e _ n ta a _ i t a t a a k ve va anger_o… ken i _ _ n c e x o l n _effe b _ k c ve ve ct_b b a a e r _ … y vu t a g e t a r e e b t p a y l n e induce f y g a r _ e t ff t_b _ n t k _ t x de _a c m h c e y _ f i eeu peripher… m u a i _ n f f p a t f r a _e c t n … e f o l e b d ni t i e ta e r o _ t ve e s f d _ x t vu l _ y p k _ x a k c y a o e o e _ c … i t b h trojan_a… b r t k n o e p p o _ e k x ve ct o y p ve en e o _e n r p a o _ new_em… f f t h x h e p f l e t f se f _ e i l c vu nd_u f _ b e o er_a e a _a c r a e gath c t _ a e _n r h e i r l n ta n b _ _ t e l k k e n 
i e p t e t y f f e a k p n n l a _ i o t e a i t e t o l s decepti… t a ffx o e d byso tand… h n e t t attacker9 x ct e e x p t _b p r _ _ h t _ i p e y t o i a ve u t g l m u i o r f vu risk_and… e p n l _ y l x o b a _ r l k a d n _ _ p h o l o f h e _ o t _ f i t w _ l e e r e _ l _ i k c t d t p a e d e y f l e t h y ve o p a y o … b p a a x it m t t _ h t l e p l r t x t c t a e k _ x c p h fo t b x e t o vu d i x x r victim10… e i c ve r f _ h x o h e neuroph… t _ a r p t _ p n t p h a d t l a t r n p s e f a e t e y i credulit… o o _ o e t_b x ve n t t o e c _ i _byeffec a source_… k taken ecetn_ _ o _eff e k o ken_eftfak a t o a ect_by n ta a _ t e ve o a _ l k e l b l r ei i k i t _ f _ u t u p h l u a to_obtai… f t b it t f a _ p l toa o x f e e recipien… ve _ xh e lo o g _ ha c neurotic… _ k p e t e g p vu ve x s l _ _ e c factors_… e a _vu t o e g ke p i i _ y t h k a l l e t t _ t n x d tro x x t vu e p _ _ m n _ n o e p i n t b p _ _ o c b to_e t have l l h n x h r x f r o p x _ o e l i vu e _ o t n n i t ae _ y l a vu ve u iy l a _ it software p e o ta l a … o to_eexploit e y e… h b k l a in o l _ _b o tera t e … pige gyba… _ _ friendlin… f ct _ cte _ x _ i ve p b e g b d ffe t mo d_th o a n n i i t f n_e victim4_…r o f ro k y x c e a r t ug p t p f _ ak l h l e lep b ei t l e o t o a f e _ o l _t_ h f h t u k l o l r _ e c o f t ve f c o xf t x f e vu e dh vue f a e _ p e f f l c s r t i e i vu f n v… e e _ l f o ve n t e victim13… l l n e a a f brtin e t _ u a a h e a e k t x g_abo_u e l e n l y i e w _ x t i x e e _ c _ a ake c b t _ ak i oe ve to _ explkoit p vishing_… p _ i e n p _ _ p e t t y _ t lo k t vu th tf _ o to f e _ e _ p c ve t k b explain o e e t e o int_stall_tr… t xp t _ y _ _ x h fe nd f _ x e e _ l f _ s s p l f a o c _ time_pr… _ _ et o ht vu e ec o k h fear_or_…x c n_ d a b f t l c l c ve f b e x b k o k h i vu t h _ f t _ ve l y ta e _ t vu_ e f e h l i o o e n l c o m _i b l y _ f e a ex e p _ la t p_ _ _ n t b t yh n c y e 
xp n plain h a vu c e a ta t b f t r a l ve _ a e _ iai x l t w p x t f f vu i _ n n h ve l l y e o ve gluttonl yt ak t i t a ve a h p a_ a o a t y t e n _ c p o l f ve ve a p i i o r _ c _h _vu a y_ e l vua in in ha b t ay a it c t tr m _ w f l n ve o p i e a l vu ak _b b f k l l a e _ h i by r i o e _ inr t th_t b t r t a p p a intwerper… l f ten_ sio_n _ h fe t t b f ic e t … _ c y e b a t _ uf o t e k ve h t p f s _ ve _ distracti… x x s c _ indua cing p p y e e i e d vu l ta t a nf c inturitxia ve… x e vu o s f r w f n t e t k f t t ve _ o r f p e _ vu i e f a r r i _ e e e _ k a t ne yk e x i p o e o c ve _ e t _b ha a _ a e c l y e da veh ve vu n _ xect g ve_vu ll_x in _ e c c e _ l f l e n x h e f x p _ _ e r r i t a f e e e l k r n t t o k b y n_ p k n f e e a b vup m ri a a f ke _ k e e l i a p e a h c p f l l a l h t e t l a e x a e f _ t o n a k l d n _ i b e t to s c c y a b e x p t u _ l _ o c f h t _ y n l x a p t n f ply l it f _ e a h t b i i a victim6_… px op _ t d _ t n k victim5_… impulsi… t t_ n _ o in e p t o c a vif shinyg_…h t d _ c a a x i l e e b r h persoua_ si… greed c e ul l a y _ l e n a b ff e i p n a_ ea a _ t p a o l m t r t l _ _ i a e_e p t x n la a k have b t e e f_ r vu i a o t a name-d… ve y nf x e i f e e f _ f e n e i i g k f y e vu n _ c _ _ u h k a e i n lo i x f n t vu f n f t to b n t n l n o ve e e x _e e ou n t y vu t t i e tt e t g wr _ t p _ e l i x f i a _ f t b t h t pel a b ve e r i f spelf-love o _td _ n _ oit g_ _ e el c _ a a e o r _ o h ric p c nx f _ t h e b o e k ri a c l f d f l c p a diffusio… b e x ve a r vu _ n e c p e e o uk ve k u e s f p t x f k _ t n phishin… _ e t l f l f k y pf f c p _ r o f k i x x e a t g ea t e _ c r ae t f p b fl _ i a a e x a m p f l c a ct_ na t e t e c k k _ e e t h e _ f o p h _ t t e l a e ct e o _ t x _ a e e i t l k i _ k k n xploi ea e h d e n t a e to_e t c l a n n baiting f k a _ o bx x h _ a n b f t o t i y t c e i x a k p b p _ n t l a _ e t k n y p o t excitem… k n 
_ be e n i t e t h r a e b t _ n t _ u _ t p l a extrnaver… h l o y r o r create_u… _ e i t h l _ _ ely aborat… x n b eb _ _ o y y _ l o f t t h _ n e e o y i a keeping… n t b n b e l p _ b e k e a p a o k u o ec y a e a y k f y _a b victim9_… ap t t l s g e t d _ f to vex i x e f y f ve _ y e _be g f y i ve o _ h f f _ xp s _ o victim3_… lust f e f ea c _ inloi _ p u t lo u th r e e b t t f i c _ f e t t t r t h f Attacker(15) e vu o e _ b _ b e f p _ i i m t t _ f b e e _ w nt _ c f e a _ c c g c lvu x n k b t t e t e f y o _ a r evu e t racte t x f d e e y _ a l x e e f e fe e t c it _ y _thro n e p x x u c e f f h k _ h b k t plap f t i l c t ve ut o l b gh a i l c c f t _ f i t e l u l o p e a n t t e t ta f e ve y lo e x t o_e bo o e t_ _ab o t t f f integrati… a y _ pn i e x t l t _ a e i g l i c ke e f vux n p t o e _ f in l n_ c f _el _ x l p avu a _ n xipng _ h f lo b cborn_sc cieh… ol o e s e b t o a _ r _ p l f s u happine… ffe _ t e e p i _k _ eb b lo l p a vu a f f c b a t ve t _ e nl ei n it n e e vu p pvu i e u fe Voice_o… s_urprise y f b i x y x pretend… b _ l a a k p x_ t r y t f a e nx l y ve e _ _ _ t _ ve p a e _ _ l evu f ake c ta e i n i p x l x vu _ y vee n y n e t _vu b _ a_ f n k n n k _ l e _effe e nen c i na lo have p to n ta l to ho _ k e ctf_b _e t h _ e a _ ke t e e n a y e f… _ a i t y _ y o a r f b e t vul ve evef t oitt k l x provio de… k fec tl i xpl p Attack_Goal(20) k a t y e y b _b prete_xeti… h e xplaf in a y o t e p a vu t b g _ a ain p f h i l _ t t o t _ p l disclose… o t viea cxtpim8_… x e t t l _ loi l i t u t h a p o n l px c fect_bhyave_vu emotion… t l h t _evex i eo ly p r tae ken_ef explain l c a have_vu c u toa n fl _ h n a l o k pf t e e a t _ _ t h e k i b ep e curiosity i i e a vu _ex d n h _ provitde… y_ e r t a a n ve x e x f n a r ve _ h o g a x o t e l _vub n exp nb e ee m l f helpfuln… n c p l _ i ain ri _ k _ f r x _ a conp form… n y h a t p o p e _t e b t t e o f i p _ o … r l x a c ve d f e t vu ve t o f y 
a e l e u _ a a l b x k l _ a e l e p e to _n r h a n_e f t k_ex r o p f o a a t p f t m f n ve lo b i e e l t p i p f h i indirect… nr e ct o l o f _ t o Attack_Target(15) _ i x g n b e _ r t b f _ x y i h n te ake e p i inexpx eri… n_ef_fect_by t x vu o r t r n t e i m u a e t l a a k a p u e t in e e _ n _ group_loi… g eap o

c a o k f e g x k c i e p a e vu t l h l d rl p g k to_ex e df y_ po retend…_ x e y t p t t a e lo t t _ n l b it o h a i i u _ _ c a … f pp i h t_n e t n n _ t n t a t i i s t h a a c t y f n x l _ o h y _ x l k a ve e r p t l i e f l e … a u b l _ f l e r e n y a e guilt t scarcity_… _ e i b h e b b o e opd enne… _ _ n r o p ve h i e e vu r f p preteaxti… _ _ p o o n i a f _ a e n fb e n … u x g e l g x c p c f ve x c k t g h e t _ _ n b a y t … t a i Attack_Strategy(7) p u _ f t b e e a o to_gete_i… b c m e t p h ve e y _ x y o a e _ e r o n e hub vu d _ l b r framing… c i at e t f a _ p c _ … i a r _ _ o k ve o make_a_… o help_res… x o a t t f f yt l t r c e t p m f h t l _ l a h y courtesy… f i x g c n t l vu a e p i i h y e f c b e r t n _ _ e a p e a l o o k y r d t b l l _ f x f t t e _ n e e p it o _ i b f l p e _ e e x b t i i p _ x x b … a f e _ c u e e o pd n x t e _ n _ ve r e l _ y l y a _p f o n e t g _ e t c i t _ e t f nb n p i s t b i _x n e e f o c e h u cn eent trl al_… n e t x l o c y t n e p e p b a t e i _ Attack_Method(33) c ex kc f _ a i k a_exck e f g p x l i f o soc_iba_bel_re… f o l aee t l _ f a e a k d a a in t simile arit… tf n n p k a fee o_n y f e a f facial_ex… a el t to r a p f in x t e c h l_ k t y e a telepho… o x f _ e c b e e k n l e e k e _e k t f a _ p k t e _ _ … g t a _ e e p i n b n t nx e e txo fa t b rac x ft e u n n t e i tak _ a n int at n e p _ e e _ c i insert_u… _ l l a k n_e n t t p y havee_vul e y o a k ff k e sympat… _ e e x e e b e a ln c r t in ft p en t a at_b_ c t_ l n f k victim1_… o a i p y c y a f h _ a f x f _ e t l l e n x i e e e e a a p k e n f ff i f e_vut t c c l p f e b n k ve _ h k f e a ex et e a fe f _ a h _ t i n_ c f n _ t d o e l t a e lc n ve b i e e t Attack_Medium(12) _ disclaose_… k _vu s _ n bt a e ey _c t ave t e u get_a_re… y agc hreeab… a i y _ k b f n n t m g c fe _ b e b a r o h e _ f t b o a i c t f t y t ex k c i rf l_ e _ p a y e o ve b la t e lo e f network… kindneins… f _ su 
eyx e f p diffiden… p bg cognitiv… x n o pla e pel x ffect_by al_ Effect_Mechanism(33) i x _ ai e … e aken_e of n p n f _ t ny k n l _eb e o i a a create_a… a e ct _ t l normati…t USB_stick p in x fe k x e pf a n e xpl e t l ai _ la e n en i vu su ak n k _ b to_provi… impressi…t a go t comavemit… network al_ Attack_Motivation(10) h of informa… languag… subgoal_of trigger_t… Attack_Consequence(14) self-disc… reciproc… convinc… foot-in-… Human_Vulnerability(43) social_e… sadness Social_Engineering_Information(22)

Figure 19 The knowledge graph generated in Neo4j
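A graph like Figure 19 can be produced by importing the Protégé-exported RDF/OWL data into Neo4j. A minimal sketch with the neosemantics (n10s) plugin is shown below; the file name, serialization format and constraint syntax (which varies across Neo4j versions) are illustrative assumptions, not the authors' exact commands.

```cypher
// One-time setup: n10s requires a uniqueness constraint on resource URIs.
CREATE CONSTRAINT n10s_unique_uri FOR (r:Resource) REQUIRE r.uri IS UNIQUE;

// Initialize the graph configuration with default mapping options.
CALL n10s.graphconfig.init();

// Import the ontology + instances (file name is a placeholder).
CALL n10s.rdf.import.fetch('file:///se_ontology.owl', 'RDF/XML');
```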

5.2 7 knowledge graph application examples
By virtue of the domain ontology and knowledge graph, there are at least 7 application examples (in 6 patterns) available to analyze social engineering attack scenarios or incidents.[5]

[5] All the CQL scripts for these applications were submitted as supplementary material for review.

5.2.1 Analyze single social engineering attack scenario or incident
The components of a specific social engineering attack scenario can be dissected into 11 classes of nodes with different colors. These nodes are interconnected and constitute an intuitive and vivid knowledge graph. In this way, security researchers can quickly gain insight into an attack, from the whole to the parts.

A case in point is the knowledge graph of attack scenario 9 (a reverse social engineering attack), as Figure 20 shows. The left part (of area 2) depicts the contents surrounding the attacker: attacker9, motivated by espionage, gathers and uses information about the organization structure, new employees and email addresses; formulates a reverse and progressive strategy; and crafts and performs (red arrows) multiple attack methods to elicit passwords or other sensitive information, or to get access or help to breach cybersecurity. The goal and subgoals in area 1 form an attack tree structure, which makes it possible to describe the multi-step attacks in a progressive strategy or other complex attack scenarios. The middle part (area 2) depicts the attack mediums through which the attack methods are performed, and also the interaction form with targets (victims).
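A scenario-centric view like this can be retrieved with a short CQL query. The sketch below is illustrative: the node name and the hop bound are assumptions about the knowledge base, not the authors' exact script.

```cypher
// Return the paths within two hops of attacker9, which covers the
// motivation, strategy, methods, mediums and victim of scenario 9.
MATCH p = (a:Attacker {name: 'attacker9'})-[*1..2]-()
RETURN p
```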

Figure 20 Analyze single social engineering attack scenario (e.g. scenario 9) by knowledge graph

The right part depicts the nodes related to the victim: victim9 brings about certain attack consequences because he / she has vulnerabilities such as conformity, inexperience and helpfulness, which (are exploited by the attack methods and) take effect by the mechanisms displayed in the right edge nodes. Some relations (suffer, to_exploit, explain) are not displayed here to keep a clear view; they can be returned by adding CQL expressions, clicking the node (to expand / collapse relations) or using the setting "connect result nodes".

5.2.2 Analyze the most exploited human vulnerabilities
As one of the confrontational focuses between social engineering attack and defense, human vulnerability is what attackers want to exploit and what defenders / victims want to eliminate or mitigate. Knowing the frequently exploited human vulnerabilities is therefore of great significance for social engineering defense. The exploited frequency of each human vulnerability in the knowledge base can be counted and ranked by CQL expressions (MATCH, COUNT, ORDER). Figure 21 extracts the top 3 human vulnerabilities most exploited by various kinds of attack methods: credulity, helpfulness and conformity. This suggests that these human vulnerabilities should be watched out for in security-related issues and given more attention in defense measures such as security awareness training.

Figure 21 Find the most exploited (top 3) human vulnerabilities
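The count-and-rank step described above can be sketched in CQL as follows; the label and relationship names are taken from the figures and are assumptions about the actual schema.

```cypher
// Rank human vulnerabilities by how many attack methods exploit them.
MATCH (am:Attack_Method)-[:to_exploit]->(hv:Human_Vulnerability)
RETURN hv.name AS vulnerability, count(am) AS times_exploited
ORDER BY times_exploited DESC
LIMIT 3
```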

5.2.3 Analyze the most used attack mediums and interaction forms
Similar to the analysis pattern in Section 5.2.2, a statistical analysis of attack mediums and interaction forms can be executed to get an understanding of where social engineering attacks frequently occur. Figure 22 presents the top 3 mediums most used to perform social engineering attacks in the knowledge base: email, website and telephone. This reflects that many social engineering attacks are performed through network and electronic communication, and meanwhile reminds us to beware of social engineering threats when using these communication mediums.

Figure 22 Find the most used (top 3) attack mediums and social interaction forms

5.2.4 Find additional (potential) threats for victims (targets)
For a specific victim (target), the knowledge graph can be used to find additional (potential) threats beyond the given scenario. The following analysis pattern can be extracted from the domain ontology and attack scenario analysis:
1. if (v1) → (hv) in S1 and (a2) → (am2) → (hv) ← (v2) in S2
2. then (a2) → (am2) → (hv) ← (v1) is feasible

Namely: the attacker a2 can also employ the attack method am2 to attack victim v1 (i.e. exploit victim v1's vulnerabilities hv), if victim v1 has certain human vulnerabilities hv that are exploited in scenario S1, and meanwhile the hv are found also exploited in another scenario S2 by attacker a2 through attack method am2.

Figure 23 shows this application where victim7 serves as an example. It depicts that victim7 has five human vulnerabilities which are exploited by attacker7 in scenario 7; besides, three of these vulnerabilities can also be exploited by another 5 pairs of attacker and attack method. In short, for victim7 there are 5 additional and potential attack threats, and precautions should be taken against them.

Figure 23 For specific victim, find additional threats beyond the given scenario

To evaluate this and the latter two analysis patterns, we extracted all the undirected and acyclic graphs (among red color edges) from an attacker to a victim in the knowledge graph. This treatment generated a clearly labeled dataset and meanwhile avoided subjectivity in the labeling process. In total, 345 reachable paths (i.e. attack paths) were labeled.

Among these attack paths, 177 (attacker, attack method) pairs are labeled. For all the 15 victims, this analysis pattern finds 156 new (attacker, attack method) threat pairs beyond the 21 pairs described in Table 5. Besides, the above analysis pattern recalls 176 pairs without wrong cases. The recall rate is 99.43% and the F1 score is 99.71%. One pair was omitted because one attack method's edges to exploit hv were divided and assigned to other attack methods in the same scenario.
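The Section 5.2.4 pattern maps to a CQL query roughly as below; the label, relationship and property names are assumptions based on the figures, not the authors' exact script.

```cypher
// For a given victim, find (attacker, attack method) pairs from other
// scenarios that exploit a vulnerability this victim has.
MATCH (v1 {name: 'victim7'})-[:have_vul]->(hv:Human_Vulnerability),
      (a2:Attacker)-[:craft_and_perform]->(am2:Attack_Method)-[:to_exploit]->(hv)
WHERE a2.name <> 'attacker7'
RETURN DISTINCT a2.name AS attacker, am2.name AS method, hv.name AS vulnerability
```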

5.2.5 Find potential targets for attackers
For a specific attacker, the knowledge graph can be used to find additional or potential targets beyond the given scenario. Similar to the previous analysis pattern, the following was extracted:
1. if (a1) → (am1) → (hv) ← (v1) in S1 and (am2) → (hv) ← (v2) in S2
2. then (a1) → (am1 or am2) → (hv) ← (v2) is feasible

Namely: the victim v2 can also be attacked by attacker a1 through attack method am1 or am2, if victim v1 has certain human vulnerabilities hv exploited by attack method am1 crafted by attacker a1 in scenario S1, and meanwhile the victim v2 is found to have the same vulnerabilities hv, exploited in scenario S2 by attack method am2.

Figure 24 shows this application where attacker10 serves as an example. It presents that attacker10 crafts and performs phishing to exploit victim10's vulnerabilities in scenario 10; moreover, another 6 targets have the same vulnerabilities that victim10 has and can also be exploited by attacker10 through phishing (or through attack methods in other scenarios). In brief, 6 potential targets are found for attacker10. In practice, it is helpful to notify all the potential targets if attacker10 or phishing is a serious security threat. In a penetration test, Figure 24 will offer testers more attack targets and attack methods.

Figure 24 For specific attacker, find potential targets (victims) beyond the given scenario

For all the 15 attackers, this analysis pattern finds 123 new exploitable targets beyond the 15 victims described in Table 5, and 156 new (attack method, target) pairs beyond the 21 pairs described in Table 5. This analysis pattern recalls 176 (attack method, target) pairs without wrong cases. The recall rate is 99.43% and the F1 score is 99.71%. One pair was omitted because one attack method's edges to exploit hv were divided and assigned to other attack methods in the same scenario.

5.2.6 Find paths from specific attacker to specific target
For a specific attacker and a specific victim that are not in the same attack scenario, the knowledge graph can be used to check or find feasible attack paths and potential attack methods. This is a combination of the previous two analysis patterns, and the following pattern was extracted:
1. if (a1) → (am1) → (hv) in S1 and (v2) → (hv) in S2
2. then (a1) → (am1) → (hv) ← (v2) is feasible

Namely, the attack path from attacker a1 to target v2 is feasible, if attacker a1 can successfully exploit human vulnerability hv by attack method am1, and meanwhile the target v2 is found to have the vulnerability hv.

Figure 25 shows this application where attacker10 and victim13 serve as the examples. The following 4 attack paths are extracted from the knowledge base: (attacker10)-[craft_and_perform]->(phishing)-[to_exploit]->(4 human vulnerabilities)<-[has]-(victim13). In addition, another 5 attack methods that exploit victim13's vulnerabilities but are not within these attack paths are also presented in Figure 25. These methods are potentially available for attacker10 to reach victim13.

Figure 25 For specific attacker and victim, find potential attack paths and methods
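The Section 5.2.6 path check can be sketched as a single CQL pattern match; as before, the schema names are assumptions based on the figures.

```cypher
// Feasible attack paths from attacker10 to victim13: a method performed
// by the attacker exploits a vulnerability the victim has.
MATCH p = (a:Attacker {name: 'attacker10'})-[:craft_and_perform]->
          (:Attack_Method)-[:to_exploit]->
          (:Human_Vulnerability)<-[:have_vul]-(v {name: 'victim13'})
RETURN p
```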

                                              (A, AM)   (AM, T/V)   (A, T/V)      AP
  Existed in scenarios description                 21          21         15      94
  New find beyond the scenarios description       156         156        123     251
  Folds                                          7.43        7.43        8.2    2.67
  Labeled                                         177         177          -     345
  Recall                                          176         176          -     344
  Recall rate                                  99.43%      99.43%          -  99.71%
  F1 score                                     99.71%      99.71%          -  99.85%

  (A: attacker; AM: attack method; T/V: attack target / victim; AP: attack paths)

Figure 26 Experiment results and statistic analysis of Sections 5.2.4, 5.2.5 and 5.2.6

For all the 15 attackers and 15 targets, this analysis pattern finds 251 new attack paths beyond the 94 paths described in Table 5, and 123 new (attacker, target) pairs beyond the 15 pairs described in Table 5. For all 345 labeled attack paths, this analysis pattern recalls 344 attack paths without wrong cases. The recall rate is 99.71% and the F1 score is 99.85%. One attack path was omitted because one attack method's edges to exploit hv were divided and assigned to other attack methods in the same scenario. Figure 26 summarizes the experiment results and statistical analysis of Sections 5.2.4, 5.2.5 and 5.2.6.

5.2.7 Analyze the same origin attack
In general, attack methods am1 and am2 are similar or related if they have some common features; am1 and am2 might be launched by the same attacker if they have certain crucial common features, e.g. they point to the same domain address controlled by the attacker. Further, am1 and am2 are likely to be same-origin, and the attackers a1 and a2 are likely in the same attack organization, if the above (am1, am2) are launched respectively by two different attackers (a1, a2) who are motivated by the same motivation m to attack different victims (v1, v2) who have the same affiliation. Based on the above cognition or assumption, Figure 27 shows a knowledge graph application example that analyzes a same-origin attack.

Besides returning the graph existing in the knowledge base, new relations and nodes can be created. A new relation "same affiliation" is created between victim10 and victim15, since they both have the data property "affiliation" with the equal value ("Company A"). There is a potential relation "same origin attack" between the whaling and phishing nodes, because the whaling attack uses a Trojan horse or back door with the encoded domain address "att.eg.net", and this address is also found in the malicious link of the phishing attack. Furthermore, since attacker15 and attacker10 have the same motivation "financial gain", and victim15 and victim10 are in the same "Company A", it can be inferred that these two scenarios compose a same-origin and organized attack. Thus, we create a new relation "same origin attack" between the two attack method nodes and a relation "in the same organization" between the two attacker nodes.

Figure 27 Analyze the same origin attack by knowledge graph

6 Discussion
There are some studies related to social engineering ontology. Simmonds et al. [17] proposed a conceptualization / ontology for network security attacks, in which components (access, actor, attack, threat, motive, information, outcome, impact, intangible, system administrator) are included. Although some components (e.g. actor, motive, information) are similar to concepts in this paper, the ontology [17] focuses on network security (and access control), which cannot be used to describe the social engineering domain. Oosterloo [18] presented an ontological chart, in which concepts

Oosterloo [18] presented an ontological chart in which concepts such as attacker, threat, risk, stakeholder and asset are involved. However, this chart serves as a model to summarize and organize aspects related to social engineering risk management; its purpose is not a formal and explicit description of the concepts and relations in the social engineering domain. Vedeshin [19] discussed three phases (orchestration, exploitation, and compromise) of social engineering attacks, covering some classes such as target, actor, goal, techniques, medium, execution steps and maintaining access. However, this taxonomy is used to classify different social engineering attacks. Mouton et al. [20] described an ontological model of social engineering attack consisting of six entities: social engineer, target, medium, goal, compliance principles and techniques. However, the concept definitions of these entities were not presented and the relations among them were not specified; that is, it does not constitute a domain ontology. Besides, the social engineering definition in [20] is proposed from the perspective of persuasion, which describes only a part of social engineering [1]. As another result, the model does not include some important entities (e.g. human vulnerability) and aspects (e.g. deception and trust). Tchakounté et al. [21] discussed a specific spear phishing scenario / flow and its description logic (DL), yet other social engineering attack types were not involved. Li and Ni [22] discussed the difficulty of distinguishing the social engineering attacks (methods) collected from six studies. They identified some core concepts to characterize social engineering attack by aligning these concepts with existing security concepts, and then provided a description logic for a security ontology and attack classification. In the security ontology, social engineer, social engineering attack, human and human vulnerability were respectively aligned as subclasses of attacker, attack, asset and vulnerability; another two concepts, attack media and social engineering techniques, were also included. However, human is the target rather than the asset that social engineering attacks aim to harm, and according to their text and ontology implementation, social engineering attack and technique seem to refer to the same concept. This might be the reason why the concepts' relations in their work were not aligned. Besides, the domain ontology of social engineering is not the focus of study [22], and the above six (or five) concepts are not sufficient to analyze relatively complex social engineering attack incidents / scenarios. Alshanfari et al. [23] gathered some terms related to social engineering and attempted to organize them with Protégé using the method described in [2]. However, the terms were extracted from only 30 publications from 2015 to 2018, and only three entity classes (attack type, threat and countermeasures) were presented, in which some terms are just related to the class yet are not instances of it (e.g. guilt, websites in attack type; sensitive information, password in threat). Besides, the relations among these classes were not described clearly. Thus, this work is mainly oriented to terms and classification. Nevertheless, we would like to express our appreciation to the above works and other researchers who make efforts in this field.

We develop a domain ontology of social engineering in cybersecurity and conduct ontology evaluation by knowledge graph application.
• The domain ontology describes what entities significantly constitute or affect social engineering and how they relate to each other. It provides a formal and explicit knowledge schema, and can be used to understand, analyze, reuse and share domain knowledge of social engineering.
• The 7 analysis examples by knowledge graph not only show the ontology evaluation and application, but also present new means to analyze social engineering attacks and threats.
• In addition, the workflow of 1) using Protégé to develop the ontology, create instances and build the knowledge base, and 2) then employing Neo4j to import the RDF/OWL data, optimize the knowledge base and construct the knowledge graph for better data analysis and visualization also provides a reference for related research.
• In the ontology, some taxonomies (subclasses) or relations might be verbose or omitted. But as mentioned before, subclass names will be converted to node labels and inverse relations can facilitate knowledge retrieval; therefore, users can add or delete them based on specific application requirements.
• The material of the attack scenarios and the data of ontology+instances offer a dataset that can be used for future related research. The knowledge graph dataset (224 instance nodes, 344 resource nodes and 939 relations of 15 attack scenarios) seems small. Yet it covers 14 kinds of social engineering types, and the 6 kinds of analysis patterns have demonstrated the various feasibilities of the proposed ontology and knowledge graph in analyzing social engineering attacks and threats.
• To the best of our knowledge, this is the first work which completes a domain ontology for social engineering in cybersecurity, and further provides its knowledge graph application for attack analysis.

Due to the complexity of the social engineering domain, the ontology can hardly be perfect in a single establishment. We throw out a brick to attract a jade and look forward to superior studies by researchers in this field.
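The same-origin analysis pattern of Figure 27 can also be expressed procedurally. The fragment below is a minimal sketch, not the paper's implementation: the node names, property keys and relation names are illustrative, and it covers only the single-shared-property case (the paper additionally combines a shared motivation with the victims' shared affiliation to infer a common attack organization). It shows how new relations such as "same_affiliation" and "same_origin_attack" can be derived from equal data-property values:

```python
# Sketch of the same-origin analysis pattern (illustrative names only).
# Nodes are dicts of data properties; derived relations are (source, type, target).

nodes = {
    "victim10":   {"affiliation": "Company A"},
    "victim15":   {"affiliation": "Company A"},
    "attacker10": {"motivation": "financial gain"},
    "attacker15": {"motivation": "financial gain"},
    "whaling":    {"encoded_address": "att.eg.net"},
    "phishing":   {"encoded_address": "att.eg.net"},
}

def infer_relations(nodes):
    """Create a new relation between every node pair sharing a property value."""
    rules = [
        ("affiliation", "same_affiliation"),        # victims of one organization
        ("motivation", "same_motivation"),          # attackers with one motive
        ("encoded_address", "same_origin_attack"),  # attacks sharing an indicator
    ]
    relations = []
    names = sorted(nodes)
    for prop, rel in rules:
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                if prop in nodes[a] and nodes[a][prop] == nodes[b].get(prop):
                    relations.append((a, rel, b))
    return relations

print(infer_relations(nodes))
```

In a Neo4j deployment the same effect would be obtained with a relation-creating query rather than Python, but the rule structure, equality of data properties implying a new edge, is the same.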

7 Conclusion
This paper develops a domain ontology of social engineering in cybersecurity, in which 11 concepts of core entities that significantly constitute or affect the social engineering domain, together with 22 kinds of relations among these concepts, are defined. It provides a formal and explicit knowledge schema to understand, analyze, reuse and share domain knowledge of social engineering. Based on this domain ontology, this paper builds a knowledge graph using 15 social engineering attack incidents / typical scenarios. The 7 knowledge graph application examples (in 6 kinds of analysis patterns) demonstrate that the ontology together with the knowledge graph can be used to analyze social engineering attack scenarios or incidents, to find the (top ranked) threat elements (e.g. the most exploited human vulnerabilities, attack mediums), to find potential attackers, targets and attack paths, and to analyze the same origin attacks.

Author details
1School of Cyber Security, University of Chinese Academy of Sciences, Beijing, CN. 2Beijing Key Laboratory of IoT Information Security Technology, Institute of Information Engineering, Chinese Academy of Sciences, Beijing, CN.

References
1. Z. Wang, L. Sun, H. Zhu, Defining Social Engineering in Cybersecurity, IEEE Access 8 (2020) 85094–85115. URL: https://doi.org/10.1109/access.2020.2992807.
2. N. F. Noy, D. L. McGuinness, Ontology Development 101: A Guide to Creating Your First Ontology, Technical Report, Knowledge Systems Laboratory, 2001. URL: https://protege.stanford.edu/publications/ontology_development/ontology101.pdf.
3. M. A. Musen, Protégé Team, The Protégé Project: A Look Back and a Look Forward, AI Matters 1 (2015) 4–12. URL: https://pubmed.ncbi.nlm.nih.gov/27239556.
4. Dimensional Research, The Risk of Social Engineering on Information Security: A Survey of IT Professionals, Technical Report, Dimensional Research, 2011. URL: https://www.stamx.net/files/The-Risk-of-Social-Engineering-on-Information-Security.pdf.
5. A. Chitrey, D. Singh, V. Singh, A comprehensive study of social engineering based attacks in India to develop a conceptual model, International Journal of Information and Network Security 1 (2012) 45.
6. R. E. Indrajit, Social Engineering Framework: Understanding the Deception Approach to Human Element of Security, International Journal of Computer Science Issues (IJCSI) 14 (2017) 8–16.
7. B. Fang, The definitions of fundamental concepts, in: Cyberspace Sovereignty, Springer, 2018, pp. 1–52.
8. B. Fang, Define cyberspace security, Chinese Journal of Network and Information Security 4 (2018) 1–5.
9. P. Damle, Social engineering: A tip of the iceberg, Information Systems Control Journal 2 (2002) 51–52.
10. K. C. Redmon, Mitigation of Social Engineering Attacks in Corporate America, Greenville: East Carolina University (2005).
11. K. Ivaturi, L. Janczewski, A taxonomy for social engineering attacks, in: International Conference on Information Resources Management, Centre for Information Technology, Organizations, and People, 2011, pp. 1–12. URL: https://aisel.aisnet.org/cgi/viewcontent.cgi?article=1015&context=confirm2011.
12. C. F. Mohd Foozy, R. Ahmad, M. Abdollah, Y. Robiah, Z. Masud, Generic Taxonomy of Social Engineering Attack, in: MUiCET, 2011, pp. 1–7.
13. P. S. Maan, M. Sharma, Social engineering: A partial technical attack, International Journal of Computer Science Issues 9 (2012) 1694–0814. URL: https://pdfs.semanticscholar.org/7e51/0456042c26cade06d74ea755c774713c46cf.pdf.
14. K. D. Mitnick, W. L. Simon, The Art of Deception: Controlling the Human Element of Security, John Wiley & Sons, 2011.
15. Z. Wang, H. Zhu, L. Sun, Social Engineering in Cybersecurity: Effect Mechanisms, Human Vulnerabilities and Attack Methods, IEEE Access 9 (2021) 11895–11910. URL: https://doi.org/10.1109/ACCESS.2021.3051633.
16. Neo4j community edition 3.5.19, 15 June 2020. URL: https://neo4j.com/download-center/#community.
17. A. Simmonds, P. Sandilands, L. van Ekert, An Ontology for Network Security Attacks, in: S. Manandhar, J. Austin, U. Desai, Y. Oyanagi, A. K. Talukder (Eds.), Applied Computing, Springer Berlin Heidelberg, Berlin, Heidelberg, 2004, pp. 317–323.
18. B. Oosterloo, Managing social engineering risk: making social engineering transparant, Ph.D. thesis, University of Twente, 2008. URL: http://essay.utwente.nl/59233/1/scriptie_B_Oosterloo.pdf.
19. A. Vedeshin, Contributions of Understanding and Defending Against Social Engineering Attacks, Master's thesis, Department of Computer Science, Tallinn University of Technology, 2016.
20. F. Mouton, L. Leenen, M. M. Malan, H. S. Venter, Towards an Ontological Model Defining the Social Engineering Domain, in: ICT and Society, IFIP Advances in Information and Communication Technology, Springer, Berlin, Heidelberg, 2014, pp. 266–279. URL: https://link.springer.com/chapter/10.1007/978-3-662-44208-1_22.
21. F. Tchakounté, D. Molengar, J. M. Ngossaha, A Description Logic Ontology for Email Phishing, International Journal of Information Security Science 9 (2020) 44–63.
22. T. Li, Y. Ni, Paving Ontological Foundation for Social Engineering Analysis, in: P. Giorgini, B. Weber (Eds.), Advanced Information Systems Engineering, Springer International Publishing, Cham, 2019, pp. 246–260.
23. I. Alshanfari, R. Ismail, N. J. M. Zaizi, F. A. Wahid, Ontology-based formal specifications for social engineering, International Journal of Technology Management and Information System 2 (2020) 35–46.