Dynamic Analyses for Privacy and Performance in Mobile Applications


Dynamic Analyses for Privacy and Performance in Mobile Applications

Mingyuan Xia
Doctor of Philosophy
School of Computer Science
McGill University, Montreal, Quebec
2016-08-14

A Thesis Submitted to the Faculty of Graduate Studies and Research in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy. Copyright © 2016 Mingyuan Xia

DEDICATION

To my beloved family

ACKNOWLEDGMENTS

First and foremost, I deeply appreciate my supervisor Dr. Xue Liu for his patience and advice during my graduate study. I am also very fortunate to have Dr. Laurie Hendren and Dr. David Lie provide their invaluable feedback to improve my thesis work. I want to thank Dr. Zhengwei Qi and Dr. Yi Gao for collaborating on various research projects. At McGill, I would like to thank all members of CPSLab, the staff of the School of Computer Science, and Ron Simpson. I enjoyed the fun days with friends from MTSA and the SJTU alumni. I appreciate Howard Wang's great French skills and Nos Thés for brewing the best milk tea. At IBM Almaden, Dr. Pin Zhou and Dr. Mohit Saxena provided the greatest mentorship during my internship. Finally, I want to acknowledge the IBM Ph.D. Fellowship, the McGill Lorne Trottier Fellowship, and NSERC for financially supporting my graduate career.

ABSTRACT

Mobile applications (also called apps) have greatly extended and innovated users' daily tasks. The mobile programming model features event-driven execution, rapidly changing APIs (about three generations per year), and ubiquitous access to users' personal data. These features enrich app functionality but also give rise to many new software problems that impact performance or damage user privacy, many of which are not occasional programming mistakes. In this thesis, we systematically study these problems and develop dynamic program analyses to effectively detect, diagnose, and fix them.
We start by researching the sensitive data leakage problem in apps. Since mobile apps can access various kinds of sensitive user data stored on the device, data leaks are a great concern for both end users and app market operators. Existing leak detection approaches rely on static analysis, which does not perform well on real-world apps of growing complexity, further limiting its adoption for real usage. We propose AppAudit, which embodies a novel dynamic analysis that can execute part of the app code while tracking the dissemination of sensitive data. AppAudit also includes a static analysis that shrinks the analysis scope and boosts analysis performance. The synergy of the two analyses achieves higher detection accuracy, runs 8.3× faster, and uses 90% less memory on real-world Android apps compared to previous approaches.

Based on the analysis building blocks from AppAudit, we further develop binary instrumentation to profile and improve app performance. We study 115 thousand apps and common performance anti-patterns from the existing literature. Based on this understanding, we propose AppInspector, which instruments apps to profile a small set of methods while collecting various runtime diagnostic data. These profiling data are transformed into a graph structure, from which AppInspector programmatically diagnoses three common performance anti-patterns. We also develop AppSwift, based on AppInspector, which transforms app code to automatically fix some performance anti-patterns and improve app performance. Both tools instrument app code automatically. Instrumented apps can run on unmodified Android OSes and thus are readily deployable to existing test environments. In extensive tests on real-world apps, AppInspector uncovers 22 performance issues per app, with detailed analysis results to guide developers in fixing them; AppSwift automatically eliminates about 5 such issues without any code modification from the app developer.
We believe that the analysis methodologies, frameworks, and tools developed in this thesis can assist developers in debugging various performance problems and better protecting user privacy.

ABRÉGÉ

Mobile applications (also called apps) have greatly extended and innovated users' daily tasks. The mobile programming model features event-driven execution, rapidly evolving APIs (about three generations per year), and ubiquitous access to users' personal data. These features enrich app functionality but also give rise to many new software problems that impact performance or damage user privacy, many of which are not occasional programming mistakes. In this thesis, we systematically study these problems and develop dynamic program analyses to effectively detect, diagnose, and fix them. We start by researching the sensitive data leakage problem in apps. Since mobile apps can access various kinds of sensitive user data stored on the device, data leaks become a great concern for both end users and app market operators. Existing leak detection methods rely on static analysis, which does not perform well on real-world apps of growing complexity. We propose AppAudit, which embodies a novel dynamic analysis that can execute part of the app code while tracking the dissemination of sensitive data. AppAudit also includes a static analysis to shrink the analysis scope and boost analysis performance.
The synergy of the two analyses achieves higher detection accuracy, runs 8.3× faster, and uses 90% less memory on real-world Android apps compared to previous approaches. Based on the analysis building blocks of AppAudit, we develop binary instrumentation to profile and improve app performance. We study 115 thousand apps and common performance anti-patterns from the existing literature. Based on this understanding, we propose AppInspector, which instruments apps to profile a small set of methods while collecting various runtime diagnostic data. These profiling data are transformed into a graph structure, from which AppInspector diagnoses three common performance anti-patterns. We also develop AppSwift, based on AppInspector, which transforms app code to automatically fix some performance anti-patterns and improve app performance. Both tools instrument app code automatically. Instrumented apps can run on unmodified Android operating systems and thus are readily deployable to existing test environments. In extensive tests on real-world apps, AppInspector uncovers 22 performance issues per app, with detailed analysis results to guide developers in fixing them; AppSwift automatically eliminates about 5 of these issues without any code modification from the app developer. We believe that the analysis methodologies, frameworks, and tools developed in this thesis can assist developers in debugging various performance problems and better protecting user privacy.
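The dynamic taint analysis described in the abstract works by marking data at a sensitive source and reporting a leak when marked data reaches a network sink. The toy model below illustrates that idea in plain Java; it is not AppAudit's actual implementation, and all class and method names are purely illustrative:

```java
// Toy taint-tracking model: a value carries a taint flag that survives
// derived computations; sinks check the flag before letting data leave.
public class TaintDemo {
    // A tainted value wraps raw data plus a "came from sensitive source" bit.
    static final class TValue {
        final String data;
        final boolean tainted;
        TValue(String data, boolean tainted) { this.data = data; this.tainted = tainted; }
    }

    // Source: reading the device ID yields tainted data.
    static TValue readDeviceId() { return new TValue("IMEI-123456", true); }

    // Propagation: any value derived from a tainted value is tainted.
    static TValue concat(TValue a, String suffix) {
        return new TValue(a.data + suffix, a.tainted);
    }

    // Sink: sending to the network is a leak if the payload is tainted.
    static boolean sendToNetwork(TValue v) {
        return v.tainted;           // true = leak detected
    }

    public static void main(String[] args) {
        TValue id = readDeviceId();
        TValue payload = concat(id, "&user=alice");
        System.out.println(sendToNetwork(payload) ? "LEAK" : "ok");
    }
}
```

The key property the sketch shows is propagation: the leak is flagged even though the sink never sees the source value directly, only a value derived from it.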
TABLE OF CONTENTS

DEDICATION
ACKNOWLEDGMENTS
ABSTRACT
ABRÉGÉ
LIST OF TABLES
LIST OF FIGURES
1 Introduction
  1.1 Contributions
  1.2 Thesis Organization
2 Background
  2.1 Android System Hierarchy
  2.2 Android Applications
    2.2.1 Code, Manifest, and Resources
    2.2.2 Execution Model and Performance
    2.2.3 Permission and Privacy
3 AppAudit: Analyzing and Detecting Data Leaks
  3.1 The Information Flow Problem Revisited
  3.2 Related Work
    3.2.1 Static Analysis
    3.2.2 Dynamic Analysis
    3.2.3 Compiler Techniques
  3.3 The Synergy of Two Analyses
  3.4 API Usage Analysis
    3.4.1 Call Graph Extensions
    3.4.2 API Usage Analysis
  3.5 Approximated Execution
    3.5.1 Object and Taint Representation
    3.5.2 Basic Execution Flow
    3.5.3 Complete Execution Rules
    3.5.4 Tainting Rules
    3.5.5 Execution Extensions and Optimizations
    3.5.6 Approximation Mode
    3.5.7 False Positive Analysis: Execution Path Validation
    3.5.8 False Negative Analysis: Tainting Validation
    3.5.9 Infinity Avoidance
  3.6 Evaluation
    3.6.1 Implementation
    3.6.2 Evaluation Methodology
    3.6.3 Completeness of Static API Analysis
    3.6.4 Detection Accuracy
    3.6.5 Usability
Recommended publications
  • Comparison of GUI Testing Tools for Android Applications
    Comparison of GUI testing tools for Android applications. University of Oulu, Department of Information Processing Science, Master's Thesis, Tomi Lämsä, 22.5.2017. Abstract: Test automation is an intriguing area of software engineering, especially in Android development, since Android applications must be able to run in many different permutations of operating system versions and hardware choices. This thesis compares different tools for automated UI testing of Android applications. A literature review surveys the available tools and their popularity, and examines the structure of the most popular ones. The two tools identified as the most popular are Appium and Espresso. In an empirical study, these two tools, along with Robotium, UiAutomator, and Tau, are compared against each other in test execution speed, maintainability of the test code, reliability of the test tools, and general issues. The empirical study was carried out by selecting three Android applications for which an identical suite of tests was developed with each tool. The test suites were then run, and execution speed and reliability were analysed from the results. The test code is also analysed for maintainability by counting the lines of code and the number of method calls needed to handle asynchrony related to UI updates. The issues faced by the test developer with the different tools are also analysed. This thesis aims to help industry users of these kinds of applications in two ways. First, it can be used as a source on what tools are available overall for UI testing of Android applications.
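The maintainability measures used in the thesis (lines of test code, plus the number of calls needed to handle UI asynchrony) are simple to compute mechanically. A rough sketch of such metrics follows; the asynchrony-call names are purely illustrative, not taken from the thesis:

```java
// Rough maintainability metrics for a test source file: count non-blank,
// non-comment lines, and count occurrences of asynchrony-handling calls
// (marker names here are illustrative placeholders).
public class TestMetrics {
    static int linesOfCode(String source) {
        int n = 0;
        for (String line : source.split("\n")) {
            String t = line.trim();
            if (!t.isEmpty() && !t.startsWith("//")) n++;
        }
        return n;
    }

    static int asyncCalls(String source) {
        int n = 0;
        for (String marker : new String[] {"waitForIdle(", "sleep("}) {
            int i = 0;
            while ((i = source.indexOf(marker, i)) != -1) { n++; i += marker.length(); }
        }
        return n;
    }

    public static void main(String[] args) {
        String test = "// login test\nclickLogin();\nsleep(500);\nassertShown();\n";
        System.out.println(linesOfCode(test) + " LOC, " + asyncCalls(test) + " waits");
    }
}
```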
  • Mobile Developer's Guide to the Galaxy
    Don’t Panic. MOBILE DEVELOPER’S GUIDE TO THE GALAXY, UPDATED & EXTENDED 12th EDITION. Published by Enough Software GmbH + Co. KG (Services and Tools for All Mobile Platforms), Sögestrasse 70, 28195 Bremen, Germany, www.enough.de. Please send your feedback, questions or sponsorship requests to: [email protected]. Follow us on Twitter: @enoughsoftware. 12th Edition, February 2013. This Developer Guide is licensed under the Creative Commons Some Rights Reserved License. Editors: Marco Tabor (Enough Software), Julian Harty, Izabella Balce. Art Direction and Design by Andrej Balaz (Enough Software). Contents: Prologue; The Galaxy of Mobile: An Introduction; Topology: Form Factors and Usage Patterns; Star Formation: Creating a Mobile Service; The Universe of Mobile Operating Systems; About Time and Space; Lost in Space; Conceptional Design For Mobile; Capturing The Idea; Designing User Experience; Android (The Ecosystem, Prerequisites, Implementation, Testing, Building, Signing, Distribution, Monetization); BlackBerry Java Apps (The Ecosystem, Prerequisites, Implementation, Testing, Signing, Distribution, Learn More); BlackBerry 10 (The Ecosystem, Development, Testing, Signing, Distribution); iOS (The Ecosystem, Technology Overview, Testing & Debugging, Learn More); Java ME (J2ME) (The Ecosystem, Prerequisites, Implementation, Testing, Porting, Signing, Distribution, Learn More); Windows Phone (The Ecosystem, Implementation, Testing)
  • Behavioral Analysis of Android Applications Using Automated Instrumentation
    2013 Seventh International Conference on Software Security and Reliability Companion. Behavioral Analysis of Android Applications Using Automated Instrumentation. Mohammad Karami, Mohamed Elsabagh, Parnian Najafiborazjani, and Angelos Stavrou, Computer Science Department, George Mason University, Fairfax, VA 22030. Abstract—Google's Android operating system has become one of the most popular operating systems for hand-held devices. Due to its ubiquitous use, open source nature and widespread popularity, it has become the target of recent mobile malware. In this paper, we present our efforts on effective security inspection mechanisms for the identification of malicious Android applications. To achieve that, we developed a comprehensive software inspection framework. Moreover, to identify potential software reliability flaws and to trigger malware, we develop a transparent instrumentation system for automating user interactions with an Android application that does not require source code. Additionally, for run-time behavior analysis of an application, we monitor the I/O system calls generated by the application under monitoring to the underlying Linux kernel.

    … application is not a straightforward task due to the variety of inputs and heterogeneity of the technologies [12]. Two primary methods are employed for mobile application analysis: the white-box approach and the black-box approach. In black-box testing, only the inputs and outputs of the application are exercised. On the other hand, for the white-box approach the source code needs to be analyzed. Since the source code of the malicious applications that we get from Google Play is not available, we cannot analyze their internal structure to figure out what exactly they do, but we can utilize black-box testing to define their functionality.
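The run-time behaviour analysis described above observes the I/O an application performs without needing its source code. At the Java level, the same idea can be mimicked by interposing a counting wrapper on the stream that the code under observation reads from. A minimal sketch, not the paper's system-call-level mechanism:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Interpose on an InputStream to record how many reads and bytes the code
// under observation performs: a user-level stand-in for syscall tracing.
public class MonitoredStream extends InputStream {
    private final InputStream inner;
    long readCalls = 0, bytesRead = 0;

    public MonitoredStream(InputStream inner) { this.inner = inner; }

    @Override
    public int read() throws IOException {
        readCalls++;
        int b = inner.read();
        if (b != -1) bytesRead++;
        return b;
    }

    public static void main(String[] args) throws IOException {
        MonitoredStream in = new MonitoredStream(
                new ByteArrayInputStream("hello".getBytes()));
        while (in.read() != -1) { /* code under observation consuming input */ }
        System.out.println(in.readCalls + " read() calls, " + in.bytesRead + " bytes");
    }
}
```

The code being monitored needs no changes; only the stream handed to it differs, which is the same black-box principle the paper applies at the operating-system level.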
  • Unit Testing, Integration Testing and Continuous Builds for Android
    Unit Testing, Integration Testing and Continuous Builds. Manfred Moser, simpligility technologies inc., http://www.simpligility.com, @simpligility. Agenda: get an overview of testing and continuous integration for Android app development. Why testing? What can we test? How can we do it? Apache Maven: see the previous presentation; Maven is used to control the build and more; good library reuse and dependency use makes testing easier out of the box; strong tool support; but it is all possible without Maven too. Why (automated) testing? Find problems early and you: can fix them quickly; save money on QA testing; do not get negative customer feedback; deal with feature requests instead of bugs; avoid production problems; can refactor (and change) without breaking old stuff. What are we testing? Plain Java code; Android-dependent code; configuration; user interface; look and feel. JVM vs Dalvik/Android stack. JVM-based: faster; more tools available; more mature tooling. Dalvik-based: necessary for integration tests; reproduces actual behaviour; full-stack testing (beyond the VM, to native). JVM testing tools: JUnit, TestNG, EasyMock, Unitils, Cobertura, Emma, and many more. Android SDK test tools: integrated JUnit (use on emulator/device though); instrumentation test tools (rich set of classes for testing, now well documented); MonkeyRunner (control a device/emulator running tests, take screenshots, jython).
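The deck's point about JVM-based testing is that any app logic kept free of Android classes can be tested quickly on a plain JVM, with no emulator or device. A sketch with illustrative class and method names:

```java
// Platform-independent app logic kept free of Android classes so it can be
// unit-tested on a plain JVM, without an emulator or device.
public class LoginValidator {
    static boolean isValidEmail(String s) {
        int at = s.indexOf('@');
        return at > 0 && s.indexOf('.', at) > at + 1;
    }

    static boolean isValidPassword(String s) {
        return s != null && s.length() >= 8;
    }

    public static void main(String[] args) {
        // A JUnit test would call the same methods; plain checks shown here
        // to keep the sketch dependency-free.
        System.out.println(isValidEmail("user@example.com") && isValidPassword("s3cretpw")
                ? "all checks pass" : "failure");
    }
}
```

Only code that genuinely touches Android APIs (activities, views, resources) then needs the slower Dalvik/emulator route the slides describe.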
  • Large-Scale Android Dynamic Analysis
    Andlantis: Large-scale Android Dynamic Analysis. 2013. Michael Bierma, Eric Gustafson, Jeremy Erickson, David Fritz, and Yung Ryn Choe. Sandia National Laboratories; University of California, Davis. Abstract—Analyzing Android applications for malicious behavior is an important area of research, and is made difficult, in part, by the increasingly large number of applications available for the platform. While techniques exist to perform static analysis on a large number of applications, dynamic analysis techniques are relatively limited in scale due to the computational resources required to emulate the full Android system to achieve accurate execution. In this paper, we present Andlantis: a highly scalable dynamic analysis framework for analyzing applications on the Android operating system. Andlantis runs the Android operating system in a virtualized environment and is able to provide the virtual device with artificial network data in order to provide an environment which closely replicates that of a physical device. Andlantis is able to schedule and run thousands of Android instances in parallel, enabling us to investigate the behavior of mobile malware at scale. Andlantis employs a scalable high-performance emulytics framework, minimega, to parallelize this expensive task as much as possible and achieve a level of throughput unprecedented in Android dynamic analysis: the system is capable of processing over 3000 Android applications per hour. During this processing, the system is able to collect valuable forensic data, which helps reverse-engineers and malware researchers identify and understand anomalous application behavior. We discuss the results of running 1261 malware samples.
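The scale-out idea above, many emulated devices working through a queue of apps in parallel, can be sketched with a plain thread pool. The analysis body and report format below are invented stand-ins for booting a virtual device and recording behaviour:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of scale-out dynamic analysis: a fixed pool of "virtual device"
// workers processes a queue of apps in parallel and gathers one report
// per app. App names and the report format are illustrative.
public class ParallelAnalysis {
    static String analyze(String apk) {
        // Stand-in for booting an emulator instance and recording behaviour.
        return apk + ": ok";
    }

    static List<String> analyzeAll(List<String> apks, int workers) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        List<Future<String>> pending = new ArrayList<>();
        for (String apk : apks) pending.add(pool.submit(() -> analyze(apk)));
        List<String> reports = new ArrayList<>();
        for (Future<String> f : pending) reports.add(f.get()); // keep input order
        pool.shutdown();
        return reports;
    }

    public static void main(String[] args) throws Exception {
        List<String> reports = analyzeAll(List.of("app1.apk", "app2.apk", "app3.apk"), 2);
        reports.forEach(System.out::println);
    }
}
```

Throughput then scales with the worker count until the host runs out of CPU or memory, which is exactly the resource pressure that motivates a cluster-level framework like the one the paper describes.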
  • Guide to Test Automation Tools 2017 - 2018
    Guide to Test Automation Tools 2017–2018. WHITEPAPER, QATestLab 2017. Copyright 2017 © QATestLab. All Rights Reserved. Table of Contents: Summary; Introduction; 1. Test Automation Tools: Market Review (1.1 Selenium WebDriver Framework; 1.2 Appium Framework; 1.3 Robotium Framework; 1.4 Serenity Framework; 1.5 Robot Framework; 1.6 Galen Framework; 1.7 HP Unified Functional Testing (UFT); 1.8 Ranorex Studio; 1.9 TestComplete; 1.10 Telerik Test Studio; 1.11 Applitools Eyes; 1.12 Test Automation Tools and Frameworks: Comparison of Technical Aspects); 2. Test Automation Tools Approved by QATestLab (2.1 Selenium WebDriver; 2.2 Appium; 2.3 TestComplete; 2.4 Ranorex Studio); 3. Summary; Contact Information. Summary: This whitepaper aims to provide comprehensive data on the most popular test automation tools in 2017–2018, including descriptions of the parameters that can be considered when selecting a tool or framework for test automation. The document also compares the leading test automation tools, highlighting both advantages and disadvantages, along with their main objectives, technical characteristics, and information about each provider. The whitepaper is intended to assist in selecting a proper test automation tool while avoiding losses of time and money. Besides, it includes recommendations on the most effective test automation tools, with information about their effectiveness and maintainability, prepared by QATestLab on the grounds of the successful execution of 50 test automation projects.
  • Profiling the Responsiveness of Android Applications Via Automated
    Profiling the Responsiveness of Android Applications via Automated Resource Amplification. Yan Wang and Atanas Rountev, Ohio State University. ABSTRACT: The responsiveness of the GUI in an Android application is an important component of the user experience. Android guidelines recommend that potentially expensive operations should not be performed in the GUI thread, but rather in separate threads. The responsiveness of existing code can be improved by introducing such asynchronous processing, either manually or automatically. One simple view is that all potentially expensive operations should be removed from the GUI thread. We demonstrate that this view is too simplistic, because run-time cost under reasonable conditions may often be below the threshold for poor responsiveness. We propose a profiling approach to characterize response times as a function of the size of a potentially expensive resource (e.g., shared preferences store, bitmap, or SQLite database).

    … the user may decide to uninstall the application and/or rate it negatively in the app market. Android guidelines [9, 6] are very clear on the importance of designing responsive applications. The general rule is the following: "In any situation in which your app performs a potentially lengthy operation, you should not perform the work on the UI thread, but instead create a worker thread and do most of the work there." [9]. There are various mechanisms for achieving this goal. Typical examples include user-managed threads, AsyncTask, and IntentService. The responsiveness of existing code can be improved by introducing these mechanisms either through manual refactoring or by using automated transformations (e.g., [12, 11]). A natural question that arises in this context is the following: which operations should be re…
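The guideline the paper examines can be sketched in plain Java: time the operation, and move it to a worker thread when it exceeds a responsiveness threshold. The 16 ms threshold below is a common rule of thumb for one display frame, not a figure from the paper, and the resource-loading body is an invented stand-in:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: measure a potentially expensive operation, then run it on a
// worker thread if it would exceed a responsiveness threshold.
public class Responsiveness {
    static final long THRESHOLD_MS = 16;   // rough one-frame budget

    // Stand-in for loading a resource whose cost grows with its size.
    static long loadResource(int size) {
        long sum = 0;
        for (int i = 0; i < size; i++) sum += i;
        return sum;
    }

    static boolean shouldOffload(long observedMs) {
        return observedMs > THRESHOLD_MS;
    }

    public static void main(String[] args) throws Exception {
        long t0 = System.nanoTime();
        loadResource(1_000_000);
        long ms = (System.nanoTime() - t0) / 1_000_000;

        if (shouldOffload(ms)) {
            // Equivalent in spirit to a worker thread / AsyncTask on Android.
            ExecutorService worker = Executors.newSingleThreadExecutor();
            Future<Long> result = worker.submit(() -> loadResource(1_000_000));
            System.out.println("offloaded, result=" + result.get());
            worker.shutdown();
        } else {
            System.out.println("fast enough on the GUI thread (" + ms + " ms)");
        }
    }
}
```

The paper's observation is precisely that this decision depends on the measured cost for realistic resource sizes: a small operation may be cheaper than the machinery needed to offload it.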
  • Automation for Mobile Apps
    automation for mobile apps. SeConf 13 Workshop, http://appium.io/seconf.pdf. Jonathan Lipps | @jlipps | Sauce Labs; @TheDanCuellar | @maudineormsby. Appium is the cross-platform solution for native and hybrid mobile automation. Appium introduction. iOS: calabash-ios, Frank, UIAutomation, ios-driver, KeepItFunctional. Android: calabash-android, MonkeyTalk, Robotium, UiAutomator, selendroid. Philosophy: R1. Test the same app you submit to the marketplace. R2. Write your tests in any language, using any framework. R3. Use a standard automation specification and API. R4. Build a large and thriving open-source community effort. Platforms: real devices (iOS, Android); simulators (iOS, Android, FirefoxOS); hybrid apps (iOS, Android, FirefoxOS); Safari on iOS; Chrome on Android; robot-controlled devices. Architecture: Apple Instruments & UIAutomation for iOS; Google UiAutomator for Android (4.2.1 up); Selendroid for older Android & hybrid; Selenium WebDriver interface. Selenium WebDriver? This is SeConf, isn't it? Appium.app: GUI for launching the Appium server; monitor status; set preferences. Appium.app Inspector: probe your app; create hooks for UI elements in the app; try out actions; record/playback actions; convert UIAutomation JS to Appium code. Appium.app: Mac: stable; Windows: under development. Monitor, Preferences, Inspector, Recorder. Robot support: Bitbeambot Delta-2, http://www.bitbeam.org; Tapster, https://www.tindie.com/products/hugs/robot-that-plays-angry-birds/; redirects touch actions
  • GUI-Guided Test Script Repair for Mobile Apps
    This is the author's version of an article that has been published in this journal. Changes were made to this version by the publisher prior to publication. The final version of record is available at http://dx.doi.org/10.1109/TSE.2020.3007664. GUI-Guided Test Script Repair for Mobile Apps. Minxue Pan, Tongtong Xu, Yu Pei, Zhong Li, Tian Zhang, Xuandong Li. Abstract—Graphical User Interface (GUI) testing is widely used to test mobile apps. As mobile apps are frequently updated and need repeated testing, to reduce the test cost their test cases are often coded as scripts to enable automated execution using test harnesses/tools. When those mobile apps evolve, many of the test scripts may become broken due to changes made to the app GUIs. While it is desirable that the broken scripts get repaired, doing it manually can be prohibitively expensive if the number of tests needing repair is large. We propose in this paper a novel approach named METER to repair broken GUI test scripts automatically when mobile apps evolve. METER leverages computer vision techniques to infer GUI changes between two versions of a mobile app and uses the inferred changes to guide the repair of GUI test scripts. Since METER only relies on screenshots to repair GUI tests, it is applicable to apps targeting open or closed source mobile platforms. In experiments conducted on 22 Android apps and 6 iOS apps, repairs produced by METER helped preserve 63.7% and 38.8% of all the test actions broken by the GUI changes, respectively.
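METER infers repairs from screenshots using computer vision. The underlying repair idea, re-binding a broken action to the closest matching widget in the new GUI, can still be illustrated with a much simpler text-based stand-in; all labels below are invented, and string edit distance replaces the paper's visual matching:

```java
// Toy version of GUI-test repair: when a script's target widget no longer
// exists, re-bind the action to the new widget whose label is closest by
// edit distance. String matching here is only a stand-in for the
// computer-vision matching the actual approach uses.
public class ScriptRepair {
    static int editDistance(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++)
            for (int j = 1; j <= b.length(); j++)
                d[i][j] = Math.min(
                        Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                        d[i - 1][j - 1] + (a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1));
        return d[a.length()][b.length()];
    }

    // Pick the label in the new GUI closest to the broken script's target.
    static String repairTarget(String oldLabel, String[] newLabels) {
        String best = newLabels[0];
        for (String l : newLabels)
            if (editDistance(oldLabel, l) < editDistance(oldLabel, best)) best = l;
        return best;
    }

    public static void main(String[] args) {
        String[] newGui = {"Log in", "Register", "Help"};
        System.out.println("click(\"Login\") -> click(\""
                + repairTarget("Login", newGui) + "\")");
    }
}
```

A real repair tool must also handle widgets that were deleted outright rather than renamed or moved, which is where a similarity threshold (and, in METER's case, visual evidence) comes in.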
  • Code Coverage Measurement Framework for Android Devices∗
    Acta Cybernetica 21 (2014) 439–458. Code Coverage Measurement Framework for Android Devices. Ferenc Horváth, Szabolcs Bognár, Tamás Gergely, Róbert Rácz, Árpád Beszédes, and Vladimir Marinkovic. Abstract: Software testing is a very important activity in the software development life cycle. Numerous general black- and white-box techniques exist to achieve different goals, and there are many practices for different kinds of software. The testing of embedded systems, however, raises some very special constraints and requirements in software testing. Special solutions exist in this field, but there is no general testing methodology for embedded systems. One of the goals of the CIRENE project was to fill this gap and define a general testing methodology for embedded systems that could be specialized to different environments. The project included a pilot implementation of this methodology in a specific environment: an Android-based digital TV receiver (set-top box). In this pilot, we implemented method-level code coverage measurement of Android applications. This was done by instrumenting the applications and creating a framework for the Android device that collected basic information from the instrumented applications and communicated it through the network to a server, where the data was finally processed. The resulting code coverage information was used for many purposes according to the methodology: test case selection and prioritization, traceability computation, dead code detection, etc. The resulting methodology and toolset were reused in another project, where we investigated whether the coverage information can be used to determine locations to be instrumented in order to collect relevant information about software usability.
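Method-level coverage by instrumentation, as the paper describes, amounts to injecting a probe at every method entry and dividing hit methods by known methods. A minimal JVM-side sketch follows; the paper's framework instruments the app's bytecode and ships the data over the network, while this toy keeps everything in-process, and all names are illustrative:

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Minimal method-level coverage: instrumented methods report themselves
// to a registry on entry; coverage = hit methods / all known methods.
public class Coverage {
    static final Set<String> known = new LinkedHashSet<>();
    static final Set<String> hit = new LinkedHashSet<>();

    static void register(String m) { known.add(m); }
    static void probe(String m) { hit.add(m); }   // the injected probe call

    static double ratio() {
        return known.isEmpty() ? 0 : (double) hit.size() / known.size();
    }

    // Two "app" methods, each carrying an injected probe at its entry.
    static void onCreate() { probe("onCreate"); }
    static void onDestroy() { probe("onDestroy"); }

    public static void main(String[] args) {
        register("onCreate");
        register("onDestroy");
        onCreate();                  // only one of the two methods runs
        System.out.println("coverage = " + (int) (ratio() * 100) + "%");
    }
}
```

The same hit data, keyed by test case instead of aggregated, is what enables the selection, prioritization, and dead-code uses the abstract lists.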
  • How to Make a Successful App Appfutura Annual Report 2016
    HOW TO MAKE A SUCCESSFUL APP. APPFUTURA ANNUAL REPORT 2016. Copyright © 2016 by AppFutura. INDEX: AppFutura; Introduction; Time and cost of a mobile app; Steps to hire the best mobile app development company; Benchmarking an app idea; Wireframes, the key to usability; Designing mobile apps; Apps for kids; Game apps; Health apps; Social apps; Wearable apps; Mobile app testing; Marketing your mobile app; Future mobile app development trends; Final Considerations; Collaborators. All rights reserved. This book or any portion thereof may not be reproduced, uploaded, transmitted or used in any manner whatsoever without the prior written permission of the publisher. For permission requests, write to the publisher, addressed "AppFutura Annual Report", at [email protected]. APPFUTURA: AppFutura started as a project at the Mobile World Congress of 2013, born out of the need of mobile app development companies to find projects and the need of clients to find experts to develop their apps. The platform has since become a worldwide community for mobile app developers, where they can meet people or companies looking for the best firms to develop an app project. We offer help so clients reach their best partner. We have more than 15,000 developers and mobile app development companies listed worldwide. On AppFutura, they can find everything they are looking for: verified leads and qualified traffic. AppFutura has so far published over 4,500 mobile app projects in its history.
  • An Automated Virtual Security Testing Platform for Android Mobile Apps
    An Automated Virtual Security Testing Platform for Android Mobile Apps. Yong Wang, College of Business and Information Systems, Dakota State University, Madison, SD 57042. Abstract—This paper proposes an automated virtual security testing platform for Android mobile apps. The testing platform includes three key components: customizing the Android OS to include mobile app trace information, creating a virtual testing platform using the customized OS, and developing static and dynamic analysis techniques for mobile malware detection. The proposed testing platform is a server-side malware detection solution. It can utilize both static and dynamic analysis and is a good complement to client-side mobile security software. Keywords—Android, mobile app, security, malware, detection.

    I. INTRODUCTION. Mobile devices such as smartphones and tablets have been widely adopted for personal and business purposes. However, the popularity of mobile devices also raises many security issues and challenges. … multiple of the following techniques: privilege escalation, remote control, financial charge, and information collection, etc. The previously stated techniques provide a malicious attacker with a variety of options to utilize a compromised mobile device. Most client-side malware detection tools are based on signatures. However, the signature-based approach can only be used to detect known malware. Google has introduced a server-side approach, Bouncer, to detect malicious apps before they hit the Google Play Store [4]. This technique is great for apps that are downloaded through the Google Play Store, but is disadvantageous for users who use third-party app stores. A cloud-based mobile malware detection framework is introduced in [5]. The proposed testing platform in this paper …