A Bayesian Approach to Privacy Enforcement in Smartphones

Omer Tripp, IBM Research, USA; Julia Rubin, IBM Research, Israel
https://www.usenix.org/conference/usenixsecurity14/technical-sessions/presentation/tripp

This paper is included in the Proceedings of the 23rd USENIX Security Symposium, August 20–22, 2014, San Diego, CA. ISBN 978-1-931971-15-7. Open access to the Proceedings of the 23rd USENIX Security Symposium is sponsored by USENIX.

Abstract

Mobile apps often require access to private data, such as the device ID or location. At the same time, popular platforms like Android and iOS have limited support for user privacy. This frequently leads to unauthorized disclosure of private information by mobile apps, e.g. for advertising and analytics purposes. This paper addresses the problem of privacy enforcement in mobile systems, which we formulate as a classification problem: When arriving at a privacy sink (e.g., database update or outgoing web message), the runtime system must classify the sink's behavior as either legitimate or illegitimate. The traditional approach of information-flow (or taint) tracking applies "binary" classification, whereby information release is legitimate iff there is no data flow from a privacy source to sink arguments. While this is a useful heuristic, it also leads to false alarms.

We propose to address privacy enforcement as a learning problem, relaxing binary judgments into a quantitative/probabilistic mode of reasoning. Specifically, we propose a Bayesian notion of statistical classification, which conditions the judgment whether a release point is legitimate on the evidence arising at that point. In our concrete approach, implemented as the BAYESDROID system that is soon to be featured in a commercial product, the evidence refers to the similarity between the data values about to be released and the private data stored on the device. Compared to TaintDroid, a state-of-the-art taint-based tool for privacy enforcement, BAYESDROID is substantially more accurate. Applied to 54 top-popular Google Play apps, BAYESDROID is able to detect 27 privacy violations with only 1 false alarm.

1 Introduction

Mobile apps frequently demand access to private information. This includes unique device and user identifiers, such as the phone number or IMEI number (identifying the physical device); social and contacts data; the user's location; audio (microphone) and video (camera) data; etc. While private information often serves the core functionality of an app, it may also serve other purposes, such as advertising, analytics or cross-application profiling [9]. From the outside, the user is typically unable to distinguish legitimate usage of their private information from illegitimate scenarios, such as sending of the IMEI number to a remote advertising website to create a persistent profile of the user.

Existing platforms provide limited protection against privacy threats. Both the Android and the iOS platforms mediate access to private information via a permission model. Each permission is mapped to a designated resource, and holds across all application behaviors and resource accesses. In Android, permissions are granted or denied at installation time. In iOS, permissions are granted or revoked upon first access to the respective resource. Hence, neither platform can disambiguate legitimate from illegitimate usage of a resource once an app is granted the corresponding permission [8].

Threat Model

In this paper, we address privacy threats due to authentic (as opposed to malicious) mobile applications [4, 18]. Contrary to malware, such applications execute their declared functionality, though they may still expose the user to unnecessary threats by incorporating extraneous behaviors (neither required by their core business logic nor approved by the user [11]), such as analytics, advertising, cross-application profiling, social computing, etc. We consider unauthorized release of private information that (almost) unambiguously identifies the user as a privacy threat. Henceforth, we dub such threats illegitimate.

While in general there is no bullet-proof solution for privacy enforcement that can deal with any type of covert channel, implicit flow or application-specific data transformation, and even conservative enforcement approaches can easily be bypassed [19], there is strong evidence that authentic apps rarely exhibit these challenges. According to a recent study [9], and also our empirical data (presented in Section 5), private information is normally sent to independent third-party servers. Consequently, data items are released in clear form, or at most following well-known encoding/encryption transformations (like Base64 or MD5), to meet the requirement of a standard and general client/server interface.

The challenge, in this setting, is to determine whether the app has taken sufficient means to protect user privacy. Release of private information, even without user authorization, is still legitimate if only a small amount of information has been released. As an example, if an application obtains the full location of the user, but then releases to an analytics server only coarse information like the country or continent, then in most cases this would be perceived as legitimate.

Privacy Enforcement via Taint Analysis

The shortcomings of mobile platforms in ensuring user privacy have led to a surge of research on realtime privacy monitoring. The foundational technique grounding this research is information-flow tracking, often in the form of taint analysis [23, 15]: Private data, obtained via privacy sources (e.g. TelephonyManager.getSubscriberId(), which reads the device's IMSI), is labeled with a taint tag denoting its source. The tag is then propagated along data-flow paths within the code. Any such path that ends up in a release point, or privacy sink (e.g. WebView.loadUrl(...), which sends out an HTTP request), triggers a leakage alarm.

The tainting approach effectively reduces leakage judgments to boolean reachability queries. This can potentially lead to false reports, as the real-world example shown in Figure 1 illustrates.

    String mImsi = ...; // source
    // 6 digits <= IMSI (MCC+MNC+MSIN) <= 15 (usually 15)
    if (mImsi != null &&
        (mImsi.length() < 6 || mImsi.length() > 15)) {
      loge("invalid IMSI " + mImsi); // sink
      mImsi = null;
    }
    log("IMSI: " + mImsi.substring(0, 6) + "xxxxxxxxx"); // sink

Figure 1: Fragment from an internal Android library, com.android.internal.telephony.cdma.RuimRecords, where a prefix of the mobile device's IMSI number flows into the standard log file.

This code fragment, extracted from a core library in the Android platform, reads the device's IMSI number, and then either (i) persists the full number to an error log if the number is invalid (the loge(...) call), or (ii) writes a prefix of the IMSI (of length 6) to the standard log while carefully masking away the suffix (of length 9) as 'x' characters. Importantly, data flow into the log(...) sink is not a privacy problem, because the first 6 digits merely carry model and origin information. Distinctions of this sort are beyond the discriminative power of taint analysis [26].

Quantitative extensions of the core tainting approach have been proposed to address this limitation. A notable example is McCamant and Ernst's [13] information-flow tracking system, which quantifies the flow of secret information by dynamically tracking taint labels at the bit level. Other approaches, based e.g. on distinguishability between secrets [1], the rate of data transmission [12] or the influence inputs have on output values [14], have also been proposed. While these systems are useful as offline analyses, it is highly unlikely that any of them can be engineered to meet the performance requirements of a realtime monitoring solution due to the high complexity of their underlying algorithms. As an example, McCamant and Ernst report on a workload on which their analysis spent over an hour.

Our Approach

We formulate data leakage as a classification problem, which generalizes the source/sink reachability judgment enforced by standard information-flow analysis, permitting richer and more relaxed judgments in the form of statistical classification. The motivating observation is that reasoning about information release is fuzzy in nature. While there are clear examples of legitimate versus illegitimate information release, there are also less obvious cases (e.g., a variant of the example in Figure 1 with a 10- rather than 6-character prefix). A statistical approach, accounting for multiple factors and based on rich data sets, is better able to address these subtleties.

Concretely, we propose Bayesian classification. To label a release point as either legitimate or illegitimate, the Bayesian classifier refers to the "evidence" at that point, and computes the likelihood of each label given the evidence. The evidence consists of feature/value pairs. There are many ways of defining the evidence. In this study, we concentrate on the data arguments flowing into release operations, though we intend to consider other classes of features in the future. (See Section 7.)

Specifically, we induce features over the private values stored on the device, and evaluate these features according to the level of similarity between the private values and those arising at release points. This distinguishes instances where data that is dependent on private values flows into a release point, but its structural and/or quantitative characteristics make it eligible for release, from illegitimate behaviors. Failure to make such distinctions is a common source of false alarms suffered by the tainting approach [4].

To illustrate this notion of features, we return to the example in Figure 1. Because the IMSI number is consid-

[Figure: similarity judgments over private values and values about to be released; mImsi = ...; "446855561234"]
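To make the tainting scheme described above concrete, the following is a minimal sketch of label-based tracking: a value carries the set of privacy sources it depends on, operations propagate the union of labels, and a sink alarms iff any label reaches it. All names here (TaintedString, sinkRaisesAlarm, etc.) are illustrative assumptions, not TaintDroid's actual API.

```java
// Minimal sketch of taint tracking. Names and propagation rules are
// illustrative assumptions; this is not TaintDroid's implementation.
import java.util.EnumSet;

class TaintSketch {
    enum Source { IMSI, LOCATION }

    static final class TaintedString {
        final String value;
        final EnumSet<Source> tags; // privacy sources this value depends on

        TaintedString(String value, EnumSet<Source> tags) {
            this.value = value;
            this.tags = tags;
        }

        static TaintedString untainted(String v) {
            return new TaintedString(v, EnumSet.noneOf(Source.class));
        }

        // Propagation: a concatenation carries the union of both labels.
        TaintedString concat(TaintedString other) {
            EnumSet<Source> union = EnumSet.copyOf(tags);
            union.addAll(other.tags);
            return new TaintedString(value + other.value, union);
        }

        // Conservative propagation: any substring keeps all labels.
        TaintedString prefix(int n) {
            return new TaintedString(value.substring(0, n), EnumSet.copyOf(tags));
        }
    }

    // Binary sink judgment: alarm iff any taint label reaches the argument.
    static boolean sinkRaisesAlarm(TaintedString arg) {
        return !arg.tags.isEmpty();
    }
}
```

Under these rules, the masked log(...) call of Figure 1 alarms exactly like the loge(...) call, since the 6-character prefix still carries the IMSI label; this over-approximation is precisely what the classification approach relaxes.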

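The similarity-based evidence, and a Bayesian judgment over it, can be sketched as follows. The single feature (length of the longest common substring between the released value and a private value), the bucket threshold of 8, and all probabilities are made-up illustrative numbers; they are not BayesDroid's actual features or estimates, which are derived from labeled data.

```java
// Sketch of Bayesian classification over one similarity feature.
// All probabilities and thresholds are hypothetical illustrations,
// not BayesDroid's actual estimates.
class BayesSketch {
    // Feature: length of the longest common substring between the value
    // about to be released and a private value stored on the device.
    static int longestCommonSubstring(String a, String b) {
        int best = 0;
        int[][] dp = new int[a.length() + 1][b.length() + 1];
        for (int i = 1; i <= a.length(); i++)
            for (int j = 1; j <= b.length(); j++)
                if (a.charAt(i - 1) == b.charAt(j - 1)) {
                    dp[i][j] = dp[i - 1][j - 1] + 1;
                    best = Math.max(best, dp[i][j]);
                }
        return best;
    }

    // Hypothetical conditional probabilities Pr(bucket | label), which a
    // real deployment would estimate from labeled release points.
    // Bucket 0: overlap shorter than 8 characters; bucket 1: 8 or longer.
    static final double[] PR_BUCKET_GIVEN_LEGITIMATE   = {0.9, 0.1};
    static final double[] PR_BUCKET_GIVEN_ILLEGITIMATE = {0.2, 0.8};
    static final double PR_LEGITIMATE = 0.7; // hypothetical prior

    // Maximum-a-posteriori label: compare Pr(label) * Pr(evidence | label).
    static boolean isIllegitimate(String released, String privateValue) {
        int bucket = longestCommonSubstring(released, privateValue) >= 8 ? 1 : 0;
        double legitimate = PR_LEGITIMATE * PR_BUCKET_GIVEN_LEGITIMATE[bucket];
        double illegitimate = (1 - PR_LEGITIMATE) * PR_BUCKET_GIVEN_ILLEGITIMATE[bucket];
        return illegitimate > legitimate;
    }
}
```

With these numbers, the masked release "IMSI: 446855xxxxxxxxx" of Figure 1 (an overlap of 6 with the stored IMSI) is classified legitimate, while releasing the full IMSI is classified illegitimate, matching the intuition that the tainting approach cannot express.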