
Comprehensive Risk Analytics: Augmenting Host Compliance with Network Configuration

Mohammed Noraden Alsaleh, Ehab Al-Shaer and Ghaith Husari Department of Software and Information Systems University of North Carolina at Charlotte Charlotte, NC, USA Email: {malsaleh, ealshaer, ghusari}@uncc.edu

Abstract—The security of cyber systems depends on the proper configuration of various heterogeneous, yet interdependent, components. Standards and automated tools have been used to scan and report the compliance of distributed devices with the security requirements of information systems. However, there are no tools yet that can aggregate and analyze the collected compliance reports using logical and statistical analysis in a way that makes sense for decision makers. In this paper, we present a set of security configuration analytics that augments the host compliance reports with network configuration. First, we present top-down analytics that enable defining, verifying, and enforcing security policies by correlating network-wide XCCDF reports and network configurations. Second, we present bottom-up analytics that enable measuring global enterprise risk based on XCCDF reports, their interdependencies, and network configurations in order to devise cost-effective vulnerability mitigation plans. We take advantage of Security Content Automation Protocol (SCAP) languages and scoring systems, along with the configuration, to study vulnerabilities and compute the system's exposure and risk scores.

I. INTRODUCTION

Computer networks are broadly used in many areas, including social, economic, medical, and other human activities. The end points (hosts) constitute the network interface with both benign and malicious users. They also maintain the system's services and assets. Hence, the hosts' proper configuration and compliance with security policy is a key property to ensure the overall network security. Traditionally, the compliance of each host is evaluated individually (i.e., in isolation from the rest of the network). We believe that the hosts' compliance state and the network core configuration need to be tied together in order to perform a comprehensive security evaluation. For example, in order to verify a policy like "clients with weak passwords are not allowed to access the financial servers remotely", we need to: (i) identify the hosts that do not enforce a strong password policy, and (ii) verify whether they can reach the financial servers in which remote access is enabled. We can see that the first step depends on the compliance state of the hosts and the last step depends on the network configuration. While multiple standard vulnerability scanning and scoring models, such as the Security Content Automation Protocol (SCAP), have been proposed, the interdependency of vulnerabilities distributed in the network and the connectivity of hosts can significantly elevate the network-wide risk. Thus, it is mandatory to analyze the compliance reports of all connected hosts and automatically integrate them with the network configuration for comprehensive risk assessment.

In this paper, we present security analytics that augment the host compliance reports with the network configuration in order to provide a holistic view of the system's security. Our contribution in this work is twofold. First, we provide a top-down approach that allows administrators to define and verify enterprise security policies in terms of benchmarks and rules defined in XCCDF, a description language introduced as part of SCAP. Network-level access control is traditionally defined based on the exact IP or the host name of the end points. In our framework, the network access control is configured based on the compliance state of the end points and not only their addresses. We provide a model checking framework to verify the rules accordingly. Second, we provide a bottom-up approach to (i) assess the risk of cyber attacks considering the network-wide compliance reports and (ii) synthesize a mitigation plan to reduce the risk below an acceptable threshold. We consider two mitigation actions: vulnerability patching and network reconfiguration. Network reconfiguration, such as blocking particular flows, may be necessary in some cases where patches are not released for particular vulnerabilities. The integration of network configuration provides live information about the countermeasures and the distribution of assets in the network, which leads to precise exposure and risk estimation. Moreover, the vulnerabilities of all potential threat sources are evaluated in order to calculate the risk associated with a particular host. In this way, we capture the dependency between vulnerabilities residing on single or multiple hosts. We use the standard SCAP specifications to communicate security checklist definitions and compliance reports. We also take advantage of the base metrics defined in the standard vulnerability scoring systems (CVSS, CMSS, and CCSS) in calculating exposure and risk scores. To integrate the hosts' compliance state with the network configuration, we extend an existing network configuration verification tool [3] to encode the compliance reports as logical constructs based on Binary Decision Diagrams (BDDs). We formalize the mitigation planning as a constraints satisfaction problem and we solve it using the Z3 SMT solver.

The rest of the paper is organized as follows. In Section II, we present the system model. In Sections III and IV, we present the top-down and the bottom-up analytics, respectively. Section V reports the performance evaluation. The related work is discussed in Section VI. Finally, we conclude and present future plans in Section VII.

Fig. 1: System Overview. (a) Top-down approach. (b) Bottom-up approach.

II. SYSTEM MODEL

Our model depends on the hosts' compliance reports, vulnerability scoring systems, network configuration, and security requirements, as depicted in Figure 1. The compliance reports generated by the vulnerability scanning tools identify the vulnerabilities that exist in each host. The severity scores of these vulnerabilities are provided by the standard measurement and scoring systems. The network configuration captures the connectivity and dependency between the hosts. The security requirements specify the required isolation levels between hosts based on their compliance state.

A. Network Configuration

In our model, the configuration of the entire network is abstracted as shown in Table I. The network Flows are determined by the basic IP header fields. The Isolation represents the action applied by the countermeasures through which the specified flow passes. The Loc is a unique ID for the device in which the isolation level is enforced on the specified flow. A value is assigned to each isolation level that intuitively determines its ability to counteract attacks (i.e., the level of protection). Table II shows examples of isolation levels. This set may be extended to include multiple encryption levels or to add other isolation levels based on the available countermeasures in the network. At this point, the protection values of isolation levels are provided by the network operator. We are planning to implement a procedure to calculate them empirically.

TABLE I: Network Configuration.
TABLE II: Isolation levels.

We build our framework on top of a network verification tool called ConfigChecker [3]. ConfigChecker is a Binary Decision Diagram (BDD) model checker that models the entire network configuration as a state machine. The state space is the cross-product of the packet attributes by the locations in the network. ConfigChecker provides a language based on the Computation Tree Logic (CTL) [7] to express the enterprise policy rules as temporal properties. Properties can be defined in terms of the locations and the packet header fields connected using standard Boolean operators, as well as the CTL temporal operators. ConfigChecker translates the properties to Boolean expressions and executes them against the global transition relation. We extended the ConfigChecker model to encode the compliance-based profiles in the global transition relation to allow for enterprise policy verification. We also extended the language interface to execute the reachability queries that are essential for estimating the hosts' exposure and to express properties in terms of compliance-based profiles.
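To make the property style concrete, the following is a hedged illustration of a CTL-style reachability requirement in the spirit of the weak-password example from the introduction. This is not ConfigChecker's concrete input syntax; the predicates weak_pwd and fin_srv and the port number are hypothetical labels introduced only for illustration:

    % Hedged illustration of a CTL-style property (not ConfigChecker's
    % concrete syntax). weak_pwd and fin_srv are hypothetical predicates
    % marking weak-password clients and financial servers; 3389 is an
    % illustrative remote-access port.
    \mathbf{AG}\big(\mathit{weak\_pwd} \rightarrow
        \neg\,\mathbf{EF}\,(\mathit{fin\_srv} \wedge \mathit{dst\_port} = 3389)\big)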
B. Compliance State

The compliance state consists of the vulnerability scanning reports of the entire network, reported using XCCDF documents, a standard XML-based specification language for writing security checklists. Multiple security checklists may be scanned for each host. XCCDF documents are used to communicate both the security checklists' definitions and their results. A security checklist consists of a set of rules that directs the vulnerability scanning tools to detect common vulnerabilities and misconfigurations. The rules may be organized in nested groups. The XCCDF specification defines a set of results that can be assigned to each rule: {Pass, Fail, Error, Unknown, Notapplicable, notchecKed, notSelected, Informational, fiXed}. In the following discussion, we use the capitalized letters as abbreviations for the results. The complete details of XCCDF structures and the meaning of these results can be found in the XCCDF specification document [21].

Definition. We define the compliance state of the entire network as the three-tuple S_comp = (H, V, M), where:
• H represents the set of hosts.
• V represents the set of common vulnerabilities.
• M is a matrix that maps each host to its vulnerabilities, with M[h, v] ∈ {P, F, E, U, N, K, S, I, X} for all h ∈ H and v ∈ V.

The common vulnerabilities are referenced by their unique IDs specified in the security checklists. The XCCDF specification allows security checklist authors to reference the standard names of common vulnerabilities and configuration settings defined in the public dictionaries, such as the Common Vulnerabilities and Exposures dictionary (CVE) [2] and the Common Configuration Enumeration (CCE) [1]. However, if the rule's definition is missing the reference to the standard name, we use the rule ID to reference the vulnerability.
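A minimal Python sketch of this compliance-state structure; the host and vulnerability IDs below are illustrative examples, not values from the paper:

    # Sketch of the compliance state S_comp = (H, V, M).
    # Host and vulnerability IDs are made-up examples.
    H = {"web01", "sql01"}
    V = {"CVE-2014-0160", "CCE-27072-5"}

    # M maps (host, vulnerability) to an XCCDF result abbreviation:
    # P(ass), F(ail), E(rror), U(nknown), N(otapplicable), K (not checked),
    # S (not selected), I(nformational), X (fixed).
    M = {
        ("web01", "CVE-2014-0160"): "F",  # failed rule -> active vulnerability
        ("web01", "CCE-27072-5"): "P",
        ("sql01", "CVE-2014-0160"): "X",
        ("sql01", "CCE-27072-5"): "F",
    }

    def active_vulns(h):
        """V_h: vulnerabilities whose rule result is Fail or Error on host h."""
        return {v for v in V if M.get((h, v)) in {"F", "E"}}

The Fail/Error filter anticipates the set of active vulnerabilities V_h used in the risk metrics of Section IV.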

C. Vulnerability Measurement and Scoring

Vulnerability measurement and scoring systems provide standard scoring mechanisms for measuring the exploitability and the impact of the common vulnerabilities. NIST Interagency Report 7502 [19] categorizes the vulnerabilities into three categories: (1) software flaws (unintended errors in the design or coding of software), (2) security configuration issues (which involve the use of security settings that negatively affect the security), and (3) software feature misuse vulnerabilities (features in the software that provide an avenue to compromise the system). Three standard scoring specifications have been created to measure the severity of these vulnerabilities: the Common Vulnerability Scoring System (CVSS), which addresses software flaws; the Common Misuse Scoring System (CMSS), which addresses misuse vulnerabilities; and the Common Configuration Scoring System (CCSS), which addresses software security configuration issues. Each of these specifications describes three groups of metrics: base, temporal, and environmental metrics, and defines its own equations to compute an exploitability sub-score, an impact sub-score, and a final severity score for each vulnerability. Six metrics are defined in the base metric group: three to characterize the exploitability (access vector, authentication, and access complexity) and three to measure the impact (confidentiality, integrity, and availability impacts).

Definition. We model the common vulnerability scores as the three-tuple S_score = (V, t, S), where:
• V represents the set of common vulnerabilities.
• t : V → T is the type function that maps the vulnerability to its category in T = {soFtware flaw, Configuration issue, misUse vulnerability}.
• S is the scores matrix, with a row for each vulnerability and a column for each of the exploitability sub-score (E), the impact sub-score (I), and the integrity impact base score (G), such that S[v, sc] ∈ [0, 10] for all v ∈ V and sc ∈ {E, I, G}.

We use the base metrics in our risk assessment model. We believe that the base metrics are sufficient because they measure the fundamental attributes of vulnerabilities, while the temporal and environmental metrics capture external factors, such as the existence of countermeasures, that can potentially affect the vulnerability. We provide an alternative for them by integrating the live network configuration in the risk assessment process.
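As a concrete illustration, the following Python sketch encodes the scores model S_score = (V, t, S); the numeric scores are fabricated placeholders, not values from any real CVSS/CMSS/CCSS feed:

    # Sketch of S_score = (V, t, S): a type function and a scores matrix with
    # columns E (exploitability sub-score), I (impact sub-score), and
    # G (integrity impact base score). All numbers are made-up placeholders.
    T = {"F": "software flaw", "C": "configuration issue", "U": "misuse"}

    t = {  # type function t : V -> T
        "CVE-2014-0160": "F",
        "CCE-27072-5": "C",
    }

    S = {  # scores matrix; S[v][sc] is in [0, 10] for sc in {"E", "I", "G"}
        "CVE-2014-0160": {"E": 8.6, "I": 2.9, "G": 0.0},
        "CCE-27072-5": {"E": 3.9, "I": 6.4, "G": 4.9},
    }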

III. ENTERPRISE XCCDF-BASED POLICY VERIFICATION

The enterprise security policy determines the isolation level (action) that should be enforced between a pair of hosts based on their compliance with selected security checklists. A security checklist, referred to as a Benchmark, consists of a number of rules. Each rule, which checks for a specific vulnerability in the target system, is assigned a value (called weight) in the interval [0, 10]. The weight is usually derived from the standard vulnerability scoring systems. The weights of the rules play a key role in calculating the total compliance scores of the benchmarks. Recall that each XCCDF rule is associated with a single vulnerability in our model.

Fig. 2: XCCDF-based Policy Structure.

A. Defining Benchmarks

A large number of XCCDF documents for the common operating systems, web servers, and other applications are available in public repositories, such as the National Vulnerability Database (NVD) [15]. However, the security checklists are not limited to the public checklists. Organizations can define their own checklists for their specific needs. We provide the ability to define new benchmarks, referred to as customized benchmarks, either from scratch, as a subset of a standard checklist, or by combining different rules from multiple standard checklists. Our framework provides a user interface to load a set of standard benchmarks that can be used to create the new customized ones. The user selects the required rules or groups from the standard benchmarks and adds them to the appropriate location in the tree structure of the new benchmark. The leafs of a benchmark's tree should be XCCDF rules.

B. Defining Compliance-Based Profiles

A compliance-based profile describes a class of hosts in terms of their compliance scores with a single benchmark or multiple benchmarks. The scoring of a benchmark should consider the weights of the rules as well as their positions within the benchmark tree. The XCCDF specification [21] provides four scoring models that can be used to calculate the score of a particular host's compliance with a benchmark: the Default Model (DM), the Flat Model (FM), the Flat Unweighted Model (UM), and the Absolute Model (AM). In our framework, we read the raw results from the scanning tools (i.e., the list of independent rules and whether their results are pass, fail, etc.), and the user can select which model to use in order to calculate the benchmark's score.

As depicted in Figure 2, a profile is defined as a Boolean relation in terms of benchmarks' compliance scores. Figure 3 shows the complete syntax of the profile definition language. The profile can be defined as a single condition on the score of a predefined benchmark calculated according to a specific scoring model. The term Sc(B, M) represents the score of the benchmark B according to the scoring model M, and C represents an arbitrary threshold set by the user, connected by a comparison operator from {>, <, =}. Multiple conditions on multiple benchmarks' scores may also be combined using binary logical operators.

Fig. 3: Policy definition language syntax.
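A hedged Python sketch of how such a profile could be evaluated; the two scoring functions are simplified stand-ins for the XCCDF Absolute and Flat models, not the exact computations of the specification:

    # Evaluate a profile such as Sc(B, AM) = 0 (some selected rule failed).
    # results maps rule_id -> (weight, result letter). The scoring functions
    # below are simplified approximations of the XCCDF scoring models.
    def absolute_score(results):
        # Absolute-style: full marks only if every selected rule passed.
        return 100.0 if all(r == "P" for _, r in results.values()) else 0.0

    def flat_score(results):
        # Flat-style: weighted fraction of passing rules.
        total = sum(w for w, _ in results.values())
        earned = sum(w for w, r in results.values() if r == "P")
        return 100.0 * earned / total if total else 0.0

    # Illustrative benchmark B_xss with two hypothetical XSS rules.
    b_xss = {"xss-rule-1": (10.0, "F"), "xss-rule-2": (10.0, "P")}
    p_ap = (absolute_score(b_xss) == 0.0)  # profile: an XSS rule failed

This mirrors the example of Section III-D, where Sc(B_xss, AM) = 0 if and only if an XSS vulnerability exists.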
C. Defining Security Policy

The security policy determines the isolation levels that should be enforced on the traffic flowing between given sources and destinations. The sources and destinations are defined as compliance-based profiles rather than exact addresses. A policy consists of a set of "Enforce" rules. As depicted in Figure 2 and Figure 3, each rule consists of a source profile, a destination profile, and an action. We show in Figure 3 four possible actions. However, the set of actions depends on the network configuration, and it can be extended based on the network hardware capabilities.

D. Example - Putting it Together

In this section, we present a practical example of a simple enterprise policy and explain the process of composing the rules in terms of compliance-based profiles. The policy we are trying to verify is depicted in Table III. To verify this policy, a comprehensive analysis of both the network configuration (reachability) and the XCCDF reports (vulnerabilities) should be conducted as follows. First, we define four customized benchmarks, as appears in the Benchmarks section of Table III: B_xss, B_lp, B_enc, and B_cv. These benchmarks are composed based on the standard benchmarks "Apache 2.2 Security Technical Implementation Guide (STIG) - Windows", "MSSQL Server 2012 STIG", and "Microsoft Exchange 2010 STIG". Second, we define the compliance-based profiles to describe the sources and the destinations of the policy rules, as appears in the Profiles section of Table III. The profile P_ap describes the Apache web servers that possess an XSS vulnerability (Sc(B_xss, AM) = 0 iff an XSS vulnerability exists). The profile P_email describes the email servers that do not enforce the least privilege principle. In the same way, P_web describes the web servers that do not enforce proper certificate validation. The last profile, P_sql, describes the SQL servers that do not have a proper encryption configuration or do not completely comply with the MSSQL 2012 STIG. Note that we used the Default scoring model for the B_sql benchmark, while we used the Absolute scoring model for all the others. Finally, we need to compose the Enforce rules that will connect the source profiles and the destination profiles with the appropriate access control action. The final rules appear in the Policy Rules section of Table III.

TABLE III: Example policy (Benchmarks, Profiles, and Policy Rules).

IV. RISK MITIGATION PLANNING

In this section, we present our formal model for automated risk mitigation planning. First, we introduce a set of metrics to quantify the network risk based on the hosts' compliance reports and the network configuration. Then, we formally define the risk mitigation planning as a constraints satisfaction problem, such that the calculated risk scores drive the risk mitigation decisions.

A. Risk Assessment Model

The goal of our risk assessment model is to measure the global risk of a network based on both the host immunity (estimated from its compliance reports) and the network resistance (estimated from the security configuration and reachability). To achieve this, we followed two design principles to define our risk metrics. First, the metrics should capture the vulnerabilities' interdependency, in the sense that exploiting a particular vulnerability may increase the likelihood of exploiting other vulnerabilities in the same device or other devices. Second, the metrics should consider the exposure of a potential victim to all potential threat sources inside or outside the network. Figure 4 shows a schematic diagram of our risk metrics. In the following, we provide their mathematical definitions.

Fig. 4: Risk Metrics.

1) Threat and Impact Indicators: We calculate threat and impact quantitative indicators for each host in the network. The ThreatIndicator measures the ability of the host to establish attacks against others. The value of this metric is strongly correlated to the number and the severity of the host's vulnerabilities, because exploitable vulnerabilities allow attackers to compromise the host, install root-kits, and, eventually, launch attacks against its neighbors. The ImpactIndicator accumulates the severity scores of the host's vulnerabilities. It is used to weigh the estimated loss that may result from exploiting all the vulnerabilities in a particular host.

Definition. The ThreatIndicator of a host is proportional to the exploitability sub-scores and the integrity impact base scores of its vulnerabilities.

Definition. The ImpactIndicator of a host is proportional to the exploitability and impact sub-scores of its vulnerabilities.

We use the integrity impact base score to calculate the ThreatIndicator, while we use the impact sub-score for the ImpactIndicator. According to NIST Report 7502 [19], integrity refers to the trustworthiness and guaranteed veracity of information. The integrity impact measures the ability of an exploited vulnerability to modify the system files and install root-kits, which can be used for spoofing and launching attacks. We formally define the ThreatIndicator (resp. ImpactIndicator), denoted by ThI_h (ImI_h), of a particular host h as the normalized sum of the products of its vulnerabilities' sub-scores:

    ThI_h = \frac{1}{N_t} \sum_{v \in V_h} S[v,E] \cdot S[v,G], \qquad
    ImI_h = \frac{1}{N_p} \sum_{v \in V_h} S[v,E] \cdot S[v,I]    (1)

where V_h = {v : v ∈ V, M[h, v] ∈ {F, E}} is the set of active vulnerabilities in the host h, and S, V, and M are defined in Section II. The ThreatIndicator (resp. ImpactIndicator) is normalized by dividing by N_t = \sum_{u \in V} S[u,E] \cdot S[u,G] (resp. N_p = \sum_{u \in V} S[u,E] \cdot S[u,I]), the weighted sum of the integrity impact base scores (resp. impact sub-scores) of all the vulnerabilities identified in the system.

2) Threat Exposure: The Exposure of a host measures its attack surface based on the network's configuration and compliance state. Specifically, the exposure depends on a number of factors: (1) the quantity of threat sources that can reach the victim host, (2) the quality of these threat sources in terms of their ThreatIndicator scores, and (3) the resistance of the paths from the threat sources in terms of deployed countermeasures. To calculate the exposure, we need to evaluate the reachability to the victim from all potential threat sources. We build a reachability tree for each host in the network. A node in the reachability tree corresponds to a potential threat source. The weight of the edge connecting two nodes (i, j) is the protection level of the countermeasure deployed between i and j, as defined in the network model.

Definition. The Exposure of a victim is proportional to the number and the ThreatIndicator scores of the potential threat sources, and it is inversely proportional to the network resistance applied in-between.

Based on this definition, we formally define the Exposure of a host h as follows. Let TR_h be the reachability tree of the host h and let (x → h) represent a path in the tree from the node x to the node h. Then the exposure Exp_h is defined as:

    Exp_h = \sum_{(x \to h) \in TR_h} \frac{ThI_x}{Resistance(x \to h)^2}    (2)

where Resistance(x → h) represents the resistance of the path from x to h, with Resistance(h → h) being 1. The resistance in this definition is calculated as the summation of the weights of all the edges in the path (i.e., the summation of the countermeasures' protection values).

3) Risk Estimation: The risk score in our model depends on: (i) the exposure to threat sources, and (ii) the impact on the assets distributed in the network.

Definition. The risk associated with a particular host is proportional to its exposure, its asset value, and the number and severity of its vulnerabilities.

The ImpactIndicator introduced before is used to estimate the severity of the host's vulnerabilities. The asset value of each host is given as part of the network configuration. The total risk associated with a host, Risk_h, is formally defined as:

    Risk_h = A(h) \times Exp_h \times ImI_h    (3)

where A(h) is the asset value of the host h. The global risk of the entire network, GR_N, is calculated as the total sum of the risks associated with all its hosts (i.e., GR_N = \sum_{h \in H} Risk_h).
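A short continuation of the sketch for equation (3); the asset-value map A and the per-host resistance maps are assumed inputs supplied with the network configuration:

    # Risk_h = A(h) * Exp_h * ImI_h (eq. 3) and the global network risk GR_N.
    # A maps host -> asset value; trees maps host -> its resistance map,
    # both assumed to be given alongside the network configuration.
    def host_risk(h, A, trees):
        return A[h] * exposure(h, trees[h]) * impact_indicator(h)

    def global_risk(A, trees):
        return sum(host_risk(h, A, trees) for h in H)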
B. Risk Mitigation Planner

Our mitigation planner aims at finding a set of patches and configuration updates that reduce the global risk below a particular threshold within a particular budget. We formalize the mitigation planner as a constraints satisfaction problem, as appears in Figure 5.

Fig. 5: Mitigation Planning Formalization.

1) Mitigation Actions: In our mitigation planner, we consider the following mitigation actions:
• Vulnerability patching. For those vulnerabilities that have patches released by the vendors, the planner will decide which of them should be patched in order to satisfy the risk and cost constraints. We assume that a cost is supplied for each available patch.
• Network reconfiguration. It may be impossible in some cases to reduce the risk below the threshold by only considering the vulnerability patching actions, because some vulnerabilities may not have patches. The mitigation planner may decide to block some flows in order to reduce the exposure of network assets to threat sources and satisfy the risk constraints as a result. We assume here that the cost of blocking a flow is estimated beforehand.

To model these actions, we define two sets of decision variables, P and B, as appears in the Decision Variables section of Figure 5. The decision variables capture the output of the mitigation planner. If a satisfying solution for the constraints satisfaction problem is found, the planner will assign a value of 0 to the variable p_ij if the vulnerability j in host i is recommended to be patched; otherwise, a value of 1 is assigned. In the same way, b_ij = 0 if the flow from j to i is recommended to be blocked, and b_ij = 1 otherwise.

2) Risk Computation: As the mitigation actions are driven by the risk assessment model, we define intermediate variables in order to calculate the exposure and risk metrics. The calculated values will be used in the constraints later. As appears in the Risk Computation section of Figure 5, we define the variables ThI_h, ImI_h, Exp_h, and Risk_h for each host h to compute its ThreatIndicator, ImpactIndicator, Exposure, and its individual Risk, respectively. N_p and N_t are precomputed normalization values for the ThreatIndicator and ImpactIndicator metrics. e_j, m_j, and g_j are the exploitability sub-score, the impact sub-score, and the integrity impact base score of the vulnerability j, respectively. τ_h and τ_gr represent thresholds on the individual risk of the host h and the network global risk, respectively.

3) Constraints: The planner expects three types of constraints from the user. First, a constraint on the global risk of the network. Second, a set of constraints on the risk associated with the individual hosts/servers. This allows the administrator to reduce the risk on specific critical servers, which, in some cases, might be more important than reducing the overall risk. Finally, the planner reads the available budget, which constrains the total mitigation cost.
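To illustrate the overall shape of this formalization, here is a minimal z3py sketch. It simplifies heavily: risk is approximated as a linear sum of precomputed per-vulnerability contributions rather than the full Exposure x ImpactIndicator product, Boolean variables replace the paper's 0/1 encoding, and all concrete numbers and names (risk_contrib, patch_cost, block_cost) are fabricated for illustration:

    from z3 import Bool, If, Solver, Sum, sat

    # Toy instance: 2 hosts x 2 vulnerabilities, one blockable inbound flow
    # per host. risk_contrib[i][j] is an assumed precomputed contribution
    # of vulnerability j on host i to the global risk.
    risk_contrib = [[3.0, 1.5], [2.0, 4.0]]
    patch_cost = [[5.0, 8.0], [6.0, 2.0]]
    block_cost = [4.0, 4.0]      # cost of blocking the flow into host i
    tau_gr, budget = 5.0, 12.0   # global risk threshold and budget

    p = [[Bool(f"p_{i}_{j}") for j in range(2)] for i in range(2)]  # patch?
    b = [Bool(f"b_{i}") for i in range(2)]                          # block?

    # Residual risk: a contribution is removed if its vulnerability is
    # patched or the flow toward its host is blocked.
    residual = Sum([If(p[i][j], 0.0, If(b[i], 0.0, risk_contrib[i][j]))
                    for i in range(2) for j in range(2)])
    cost = Sum([If(p[i][j], patch_cost[i][j], 0.0)
                for i in range(2) for j in range(2)] +
               [If(b[i], block_cost[i], 0.0) for i in range(2)])

    s = Solver()
    s.add(residual <= tau_gr, cost <= budget)
    if s.check() == sat:
        m = s.model()
        print("patch:", [(v, m[v]) for row in p for v in row])
        print("block:", [(v, m[v]) for v in b])
    else:
        print("no mitigation plan satisfies the thresholds (unsat)")

In the full formalization of Figure 5, the intermediate variables ThI_h, ImI_h, Exp_h, and Risk_h would appear as solver terms, with per-host constraints Risk_h ≤ τ_h alongside the global constraint.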
In reduce the risk on specific critical servers, which, in some this experiment, we study the impact of increasing the number cases, might be more important than reducing the overall risk. of profiles on the space and time requirements. Each profile Finally, the planner reads the available budget which constrain is associated with a variable in the BDD model. Although we the total mitigation cost. do not expect a high number of profiles in practical networks, we studied the impact of up to 2000 profiles. We repeated the experiment for multiple networks with different sizes. In V. PERFORMANCE EVALUATION Figure 7a, we can see that the memory consumption increases In this section, we report the performance evaluation for linearly with the number of profiles. Figure 7b reports the the top-down and the bottom-up approaches under various time required to build the model with respect to the number network parameters. To create a large number of network of profiles. This figure, as well as Figure 6b, indicates that instances with varying parameters, we have developed a ran- the number of profiles has no direct impact on the time to dom network generator. The network generation starts with a network of hosts and routers where all hosts are connected to each other, then we distribute a set of firewalls randomly between the network nodes. The experiments were conducted on a standard PC with 3.4 GHz Intel Core i7 processor and 16 GB of RAM.

A. The Top-down Approach (Policy Verification)

For the top-down policy verification approach, the space requirements are measured by the number of BDD nodes used to encode the full network configuration, including the XCCDF-based compliance reports. We also measure the time to build the model and execute reachability queries. We study the impact of the network size and the number of compliance-based profiles on the space and time requirements.

The impact of network size. In this experiment, we study the impact of network size, measured in the number of nodes. We generated networks with sizes that ranged from 200 to 4000 nodes. The hosts constitute 40-70% of the total nodes in the network. We repeated the experiment four times with different numbers of profiles. Figures 6a and 6b show the memory and time requirements for building the BDD model. We can see that the memory requirement increases linearly with the network size. However, the time required to build the model is best described by a quadratic function. Based on our experiments, the time and space requirements stay in safe ranges even for large networks. Figure 6c shows the relation between the time required to run the basic reachability queries and the network size. The reachability analysis is required to compute the exposure. For each network instance, we ran 30 reachability queries between random hosts; the figure reports the average running time. We can see that for networks of sizes below 1200 nodes, the query execution time was less than one second. However, it starts increasing linearly after this size.

Fig. 6: The impact of network size. (a) Space requirements. (b) Build time. (c) Query execution time.

The impact of the number of compliance-based profiles. In this experiment, we study the impact of increasing the number of profiles on the space and time requirements. Each profile is associated with a variable in the BDD model. Although we do not expect a high number of profiles in practical networks, we studied the impact of up to 2000 profiles. We repeated the experiment for multiple networks with different sizes. In Figure 7a, we can see that the memory consumption increases linearly with the number of profiles. Figure 7b reports the time required to build the model with respect to the number of profiles. This figure, as well as Figure 6b, indicates that the number of profiles has no direct impact on the time to build the model. However, for networks of sizes greater than 3000 nodes, the time to build the model starts increasing linearly with the number of profiles. In Figure 7c, we show the query execution time with respect to the number of profiles. We ran 30 random queries against multiple networks whose sizes varied between 200 and 2000 nodes and reported the average running time. The number of profiles varied between 0 and 2000. The results show that the query execution time is independent of the number of profiles; it is only affected by the network size.

Fig. 7: The impact of the number of profiles. (a) Space requirements. (b) Build time. (c) Query execution time.

B. The Bottom-up Approach (Mitigation Planning)

For the bottom-up mitigation planning, we report the time required to solve the constraints satisfaction problem using the Z3 SMT solver. We study the impact of the network size, the connectivity degree, and the number of vulnerabilities per host on the performance. For all experimental settings, we ran the mitigation planner multiple times with different values for the risk and the cost thresholds, including some unsat cases, where no satisfiable solution is possible.

The impact of the network size. For this experiment, we generated a number of networks with varying sizes. We fixed the number of vulnerabilities per host to one, and we generated two sets of networks: in one set the connectivity degree was set to 25, while it was set to 50 in the other. The average execution time is reported in Figure 8a. The results depict the same behavior for both connectivity degree settings. Although the solution requires a long time for large networks, the increase in the time requirements exhibits a polynomial complexity.

The impact of the connectivity degree. The connectivity refers to the size of the reachability tree of each host, which represents the number of reconfiguration decision variables in our model (i.e., |B|). The connectivity degree is the average size of the reachability trees in a network. We generated multiple networks with connectivity degrees that varied from 25 to 300. We also generated two sets of networks: in one set, the size of the networks was 50 hosts, and in the other set, it was set to 100 hosts. The results reported in Figure 8b show that the execution time follows the same behavior in the two sets of networks. The figure indicates a logarithmic complexity of the execution time with respect to the connectivity degree.

The impact of the number of vulnerabilities. For this experiment, we generated two sets of networks. In one set, the size of the networks was 50 hosts, and in the other set, it was set to 100 hosts. In both sets, the number of vulnerabilities per host varied between 15 and 125. We believe that 125 is a large number of vulnerabilities to exist in one host. The results are reported in Figure 8c. We can see that the execution time for the 100-host network with 125 vulnerabilities per host is relatively high (around seven hours). However, the increase of the execution time with respect to the number of vulnerabilities per host is best described by a 3rd-degree polynomial.

Fig. 8: The performance of the bottom-up approach; the time is reported in seconds. (a) The impact of network size. (b) The impact of the connectivity degree. (c) The impact of the number of vulnerabilities.

VI. RELATED WORKS

In this section, we review the recent research conducted in the areas of security policy verification and automated risk mitigation, focusing on works that employ standard specifications and scoring systems, such as XCCDF, OVAL, and CVSS.

A. Security Policy Verification

Recently, many formal models have been proposed to address the problem of security and reachability policy verification. ConfigChecker [3] is a BDD model checker that reads the global configuration of heterogeneous devices to verify network invariants. Mai et al. [14] employed a SAT-based approach that translates network invariants, along with a network data plane, into a satisfaction problem to check for configuration bugs and reachability problems. FLOVER [20], Veriflow [13], and NetPlumber [12] proposed new techniques and structures to model the network and verify security and reachability invariants. These approaches focus on the network configuration, but they are limited in their support for host configuration. None of the aforementioned works considers the vulnerabilities of the host and the impact of the configuration on the risk imposed by these vulnerabilities. In another direction, attack graph security hardening techniques [18], [22] consider the vulnerabilities of hosts along with the network configuration. However, these techniques are attack oriented and they do not target security policy verification. We provide a compliance-aware policy description language that provides the ability to express security policies in terms of XCCDF objects. Our verification engine correlates all XCCDF reports and the network configuration to generate a comprehensive verification report.

B. Risk Assessment and Vulnerability Mitigation

Standard specifications and universal scoring systems have been recently utilized in many quantitative and qualitative risk assessment models. Houmb et al. [9] derived frequency and impact metrics based on the CVSS score, and combined these new metrics to quantitatively estimate the risk level of information systems. Joh and Malaiya [11] employed a stochastic model based on CVSS metrics in a formal quantitative approach for software risk evaluation. Poolsappasit et al. [17] proposed a model that uses the sub-scores reported by CVSS as vulnerability exploitation probabilities that are fed to a Bayesian attack graph for calculating the global risk. None of these works integrates the host vulnerabilities with the network configuration for comprehensive analysis. Risk estimation techniques based on attack graphs [16], [17], [23] are driven by specific attack scenarios and they do not provide global quantitative risk scores. Homer et al. [8] propose a security metric model to aggregate vulnerability metrics in an enterprise network, in order to measure the likelihood of breaches within a given network configuration through attack graphs. In [6], Barrere et al. present a model for generating vulnerability remediation plans in network systems based on SAT solvers. In [10], the authors use attack graphs that augment feeds from vulnerability scanners in order to prioritize vulnerability patching. These works did not consider reconfiguration actions. Albanese et al. [4] propose an approximation algorithm to automatically generate network hardening recommendations based on attack graph analysis. This solution requires complete information about the attack exploits in terms of preconditions, and it is limited in the types of vulnerabilities that are covered.

In this work, we refine our preliminary risk model presented in [5] and we provide a mitigation planning solution based on the risk model. We integrate the hosts' compliance reports with the global network configuration to address the vulnerability dependency based on hosts' connectivity. Our mitigation actions are not limited to vulnerability patching: we provide the ability to reconfigure the network in order to reduce the risk in cases where vulnerability patching is not available.

VII. CONCLUSIONS AND FUTURE WORK

This paper investigates the feasibility of using security compliance reports, along with universal vulnerability scores and network configuration, in verifying enterprise security policies and assessing the risk of cyber attacks. We presented a framework to compose and verify policy rules correlating network-wide XCCDF reports and global network configuration. We have also presented a holistic risk model that measures the network global risk based on configuration and compliance reports. We have employed the risk model in devising a vulnerability mitigation plan automatically by identifying the required vulnerability patches and network reconfigurations in order to reduce the risk to a particular threshold within a given budget. In the next step, we are planning to implement data analysis techniques to estimate the protection levels of network countermeasures. We will also extend the mitigation planning problem to optimize the mitigation decisions.

REFERENCES

[1] Common Configuration Enumeration (CCE). http://cce.mitre.org/.
[2] Common Vulnerabilities and Exposures (CVE). http://cve.mitre.org/.
[3] E. Al-Shaer, W. Marrero, A. El-Atawy, and K. Elbadawi. Network configuration in a box: Towards end-to-end verification of network reachability and security. In ICNP, pages 123–132, 2009.
[4] M. Albanese, S. Jajodia, and S. Noel. Time-efficient and cost-effective network hardening using attack graphs. In Dependable Systems and Networks (DSN), 2012 42nd Annual IEEE/IFIP International Conference on, pages 1–12, June 2012.
[5] M. N. Alsaleh and E. Al-Shaer. Enterprise risk assessment based on compliance reports and vulnerability scoring systems. In Proceedings of the 2014 Workshop on Cyber Security Analytics, Intelligence and Automation, SafeConfig '14, pages 25–28, New York, NY, USA, 2014. ACM.
[6] M. Barrere, R. Badonnel, and O. Festor. A SAT-based autonomous strategy for security vulnerability management. In Network Operations and Management Symposium (NOMS), 2014 IEEE, pages 1–9, May 2014.
[7] E. A. Emerson. Temporal and modal logic. In Handbook of Theoretical Computer Science, Volume B: Formal Models and Semantics, pages 995–1072, 1990.
[8] J. Homer, S. Zhang, X. Ou, D. Schmidt, Y. Du, S. R. Rajagopalan, and A. Singhal. Aggregating vulnerability metrics in enterprise networks using attack graphs. Journal of Computer Security, 21(4):561–597, 2013.
[9] S. H. Houmb, V. N. Franqueira, and E. A. Engum. Quantifying security risk level from CVSS estimates of frequency and impact. Journal of Systems and Software, 83(9):1622–1634, 2010.
[10] K. Ingols, R. Lippmann, and K. Piwowarski. Practical attack graph generation for network defense. In Computer Security Applications Conference, 2006. ACSAC '06. 22nd Annual, pages 121–130, Dec 2006.
[11] H. Joh and Y. K. Malaiya. Defining and assessing quantitative security risk measures using vulnerability lifecycle and CVSS metrics. In The 2011 International Conference on Security and Management (SAM), 2011.
[12] P. Kazemian, M. Chan, H. Zeng, G. Varghese, N. McKeown, and S. Whyte. Real time network policy checking using header space analysis. In NSDI, pages 99–111, 2013.
[13] A. Khurshid, W. Zhou, M. Caesar, and P. B. Godfrey. Veriflow: Verifying network-wide invariants in real time. SIGCOMM Comput. Commun. Rev., 42(4):467–472, Sept. 2012.
[14] H. Mai, A. Khurshid, R. Agarwal, M. Caesar, P. B. Godfrey, and S. T. King. Debugging the data plane with Anteater. SIGCOMM Comput. Commun. Rev., 41(4):290–301, Aug. 2011.
[15] NIST. National Vulnerability Database (NVD). http://nvd.nist.gov/.
[16] X. Ou and A. Singhal. Security risk analysis of enterprise networks using attack graphs. In Quantitative Security Risk Assessment of Enterprise Networks, pages 13–23. Springer, 2011.
[17] N. Poolsappasit, R. Dewri, and I. Ray. Dynamic security risk management using Bayesian attack graphs. IEEE Transactions on Dependable and Secure Computing, 9(1):61–74, Jan 2012.
[18] R. Sawilla and X. Ou. Identifying critical attack assets in dependency attack graphs. In S. Jajodia and J. Lopez, editors, Computer Security - ESORICS 2008, volume 5283 of Lecture Notes in Computer Science, pages 18–34. Springer Berlin Heidelberg, 2008.
[19] K. Scarfone and P. Mell. The common configuration scoring system (CCSS): Metrics for software security configuration vulnerabilities. NIST Interagency Report 7502, December 2010.
[20] S. Son, S. Shin, V. Yegneswaran, P. Porras, and G. Gu. Model checking invariant security properties in OpenFlow. In Communications (ICC), 2013 IEEE International Conference on, pages 1974–1979, June 2013.
[21] D. Waltermire, C. Schmidt, K. Scarfone, and N. Ziring. Specification for the extensible configuration checklist description format (XCCDF) v1.2. http://csrc.nist.gov/publications/nistir/ir7275-rev4/NISTIR-7275r4.pdf.
[22] L. Wang, M. Albanese, and S. Jajodia. Attack graph and network hardening. In Network Hardening, SpringerBriefs in Computer Science, pages 15–22. Springer International Publishing, 2014.
[23] X. Yin, Y. Fang, and Y. Liu. Real-time risk assessment of network security based on attack graphs. In 2013 International Conference on Information Science and Computer Applications (ISCA 2013). Atlantis Press, 2013.
