Exploiting Bounded Rationality in Risk-Based Cyber Camouflage Games

Omkar Thakoor1, Shahin Jabbari2, Palvi Aggarwal3, Cleotilde Gonzalez3, Milind Tambe2, and Phebe Vayanos1

1 University of Southern California, Los Angeles, CA 90007, USA {othakoor, [email protected]}
2 Harvard University, Cambridge, MA 02138, USA {jabbari@seas., milind [email protected]}
3 Carnegie Mellon University, Pittsburgh, PA 15213, USA {palvia@andrew., [email protected]}

Abstract. Recent works have increasingly shown that cyber deception can effectively impede the reconnaissance efforts of intelligent cyber attackers. Recently proposed models that optimize a deceptive defense by camouflaging network and system attributes have shown effective numerical results on simulated data. However, these models possess a fundamental drawback: they assume that an attempted attack is always successful, whereas, as a direct consequence of the deceptive strategies being deployed, the attacker runs a significant risk that the attack fails. Further, this risk or uncertainty in the rewards magnifies the boundedly rational behavior of humans, which the previous models do not handle. To that end, we present Risk-based Cyber Camouflage Games, a general-sum game model that captures the uncertainty in the attack's success. For rational attackers, we show that optimal defender strategy computation is NP-hard even in the zero-sum case. We provide an MILP formulation for the general problem with constraints on cost and feasibility, along with a pseudo-polynomial time algorithm for the special unconstrained setting. Second, for risk-averse attackers, we present a solution based on prospect-theoretic modeling, along with a robust variant that minimizes regret. Third, we propose a solution that does not rely on an attacker behavior model or past data, and is effective in the broad setting of strictly competitive games, where previous solutions against bounded rationality prove ineffective.
Finally, we provide numerical results showing that our solutions effectively lower the defender loss.

Keywords: Game Theory · Cyber Deception · Rationality

1 Introduction

Rapidly growing cybercrime [15, 13, 24] has elicited the need for effective defense against adept attackers. Many recent works have proposed cyber deception techniques to thwart reconnaissance, typically a crucial phase prior to attacking [21, 17]. One deception approach is to camouflage the network by attribute obfuscation [10, 35, 7] to render an attacker's information incomplete or incorrect, creating indecision over their infiltration plan [12, 10, 4, 28]. Optimizing such a deceptive strategy is challenging due to many practical constraints on the feasibility and costs of deployment, and it depends critically on the attacker's decision-making, governed by his behavioral profile, attacking motives, and capabilities. Game theory offers an effective framework for tackling both these aspects and has been successfully adopted in security problems [2, 20, 31, 29].

Attacking a machine amounts to launching an exploit for a particular system configuration, information that is concealed or distorted due to the deceptive defense; thus, an attempted attack may not succeed. Recent game-theoretic models for deception via attribute obfuscation [30, 34] have a major shortcoming in ignoring this risk of attack failure, as they assume that an attempted attack is guaranteed to provide utility to the attacker. Further, results from recent human subject studies [1] suggest that this risk may unveil risk-aversion in human attackers rather than the perfectly rational behavior of maximizing expected utility that the models assume. Apart from risk-aversion, other behavioral models, e.g., quantal response theory [22], also assert that humans exhibit bounded rationality. This can severely affect the performance of a deployed strategy, which has not been considered by previous works.
As our first main contribution, we present Risk-based Cyber Camouflage Games (RCCG), a crucial refinement over previous models via a redefined strategy space and rewards that explicitly capture the uncertainty in attack success. As a foundation, we first consider rational attackers and show analytical results, including NP-hardness of optimal strategy computation and its MILP formulation, which, while akin to previous models, largely require independent reasoning. Further, we consider risk-averse attackers modeled using prospect theory [36] and present a solution (PT) that estimates model parameters from data to compute an optimal defense. To circumvent the limitations of parametrization and learning errors, we also present a robust solution (MMR) that minimizes worst-case regret for a general prospect-theoretic attacker. Finally, we propose a solution (GEBRA) free of behavioral modeling assumptions and avoiding reliance on data altogether, which can exploit arbitrary deviations from rationality. Our numerical results, summarized at the end, show the efficacy of our solutions.

1.1 Related work

Cyber Deception Games [30] and Cyber Camouflage Games (CCG) [34] are game-theoretic models for cyber deception via attribute obfuscation. In these, the defender can mask the true configuration of a machine, creating uncertainty in the associated reward the attacker receives for attacking the machine. These have a fundamental limitation, namely, the assumption that the attacked machine is guaranteed to provide utility to the attacker. Further, they do not consider that human agents tend to deviate from rationality, particularly when making decisions under risk. Our refined model handles both these crucial issues. A model using prospect theory is proposed in [38] for boundedly rational attackers in Stackelberg security games (SSG) [33].
However, it relies on using model parameters from previous literature, discounting the fact that they can vary largely across specific experimental setups. We provide a solution that learns the parameters from data, as well as a robust solution to deal with uncertainty in the degree of risk-aversion and, more broadly, in the parametrization hypothesis. A robust solution for unknown risk-averse attackers has been proposed for SSGs in [27]; however, it aims to minimize the worst-case utility, whereas we take the less conservative approach of minimizing worst-case regret. Previous works on uncertainty in security games consider Bayesian [18], interval-based [19], and regret-based [23] approaches; however, these do not directly apply due to fundamental differences between RCCGs and SSGs, as explained in [34].

Another approach in [38] is based on the quantal response model [22]. However, the attack probabilities therein involve terms that are exponential in rewards, which in turn are non-linear functions of integer variables in our model, leading to an intractable formulation. Nevertheless, we show the effectiveness of our model-free solution for this behavior model as well. Machine learning models such as decision trees and neural networks have been used for estimating human behavior [8]. However, the predictive power of such models typically comes with an indispensable complexity (non-linear kernels and functions, deep hidden layers of neural networks, sizeable depth and branching factor of decision trees, etc.). This does not allow the predicted human response to be written as a simple closed-form expression of the instance features, viz., the strategy decision variables, preventing a concise optimization problem formulation.
This is particularly problematic since the alternative of searching for an optimal solution via strategy enumeration is also non-viable, due to the compact input representation via a polytopal strategy space [16] in our model. MATCH [25] and COBRA [26] aim to tackle human attackers in SSGs while avoiding the complex task of modeling human decision-making, and provide robustness against deviations from rationality. However, their applicability is limited: in strictly competitive games, where deviation from rationality always benefits the defender, they reduce to the standard minimax solution. Our model-free solution GEBRA, on the other hand, achieves better computational results than minimax, and MATCH can be seen as its conservative derivative.

2 Risk-based Cyber Camouflage Games (RCCG) model

Here, we describe the components of the RCCG model, explicitly highlighting the key differences with respect to the CCG model [34].

Network Configurations. The network is a set of k machines K := {1, …, k}. Each machine has a true configuration (TC), which is simply an exhaustive tuple of attributes, so that machines having the same TC are identical. S := {1, …, s} is the set of all TCs. The true state of the network (TSN) is a vector n = (n_i)_{i∈S}, with n_i denoting the number of machines with TC i. Note that Σ_{i∈S} n_i = k. The defender can disguise the TCs using deception techniques. Each machine is "masked" with an observed configuration (OC). The set of OCs is denoted by T. Similar to a TC, an OC corresponds to an attribute tuple that fully comprises the attacker's view, so that machines with the same OC are indistinguishable.

Deception Strategies. We represent the defender strategy as an integer matrix Φ, where Φ_ij is the number of machines with TC i masked with OC j. The observed state of the network (OSN) is a function of Φ, denoted as m(Φ) := (m_j(Φ))_{j∈T}, where m_j(Φ) = Σ_{i∈S} Φ_ij denotes the number of machines under OC j for strategy Φ.
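To make the strategy representation concrete, the following minimal sketch (all instance data hypothetical, not taken from the paper's experiments) derives the OSN m(Φ) from a masking matrix Φ:

```python
import numpy as np

# Hypothetical TSN: n_i machines of each TC, so k = 5 machines in total.
n = np.array([2, 3])

# A defender strategy Phi as an s x t integer matrix:
# Phi[i][j] = number of machines with TC i masked with OC j.
Phi = np.array([[2, 0],   # both TC-0 machines shown as OC 0
                [1, 2]])  # TC-1 machines split across OC 0 and OC 1

# Every machine of TC i must be masked with some OC: sum_j Phi_ij = n_i.
assert (Phi.sum(axis=1) == n).all()

# OSN: m_j(Phi) = sum_i Phi_ij, the number of machines shown under OC j.
m = Phi.sum(axis=0)
print(m)  # [3 2]
```

Since the attacker only observes m(Φ), two TCs masked under the same OC are indistinguishable to him, which is exactly what creates the uncertainty in attack success.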
Deception feasibility and costs. Achieving deception is often costly and not arbitrarily feasible. We have feasibility constraints given by a (0,1)-matrix Π, where Π_ij = 1 if a machine with TC i can be masked with OC j. Next, we assume that masking a TC i with an OC j (if feasible) has a cost c_ij incurred by the defender, denoting the aggregated cost from deployment, maintenance, degraded functionality, etc. We assume the total cost is bounded by a budget B. These translate to linear constraints defining the valid defender strategy set:

F = { Φ : Φ_ij ∈ Z_{≥0}, Φ_ij ≤ Π_ij n_i ∀(i,j) ∈ S × T;  Σ_{j∈T} Φ_ij = n_i ∀i ∈ S;  Σ_{i∈S} Σ_{j∈T} Φ_ij c_ij ≤ B }

The first and the third constraints follow from the definitions of Φ and n.
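The three constraint families defining F can be checked mechanically. Below is a sketch of such a membership test; the instance data (n, Π, c, B) are hypothetical and chosen only for illustration:

```python
import numpy as np

n  = np.array([2, 3])                      # TSN: machines per TC
Pi = np.array([[1, 1],                     # Pi_ij = 1 iff TC i may be
               [0, 1]])                    # masked with OC j
c  = np.array([[1.0, 2.0],                 # per-machine masking costs c_ij
               [5.0, 1.5]])
B  = 8.0                                   # total budget

def is_valid(Phi: np.ndarray) -> bool:
    """Check membership of Phi in the valid strategy set F."""
    nonneg_int = np.issubdtype(Phi.dtype, np.integer) and (Phi >= 0).all()
    feasible   = (Phi <= Pi * n[:, None]).all()   # Phi_ij <= Pi_ij * n_i
    complete   = (Phi.sum(axis=1) == n).all()     # sum_j Phi_ij = n_i
    in_budget  = float((Phi * c).sum()) <= B      # sum_ij Phi_ij c_ij <= B
    return bool(nonneg_int and feasible and complete and in_budget)

print(is_valid(np.array([[2, 0], [0, 3]])))  # True: cost 2*1.0 + 3*1.5 = 6.5
print(is_valid(np.array([[0, 2], [1, 2]])))  # False: Pi forbids TC 1 as OC 0
```

Note that F is described compactly by these linear constraints over integer variables; the set of strategies it contains can still be exponentially large, which is why the paper turns to MILP formulations rather than enumeration.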
