Combining Logical and Probabilistic Reasoning in Program Analysis

Total Pages: 16

File Type: PDF, Size: 1020 KB

COMBINING LOGICAL AND PROBABILISTIC REASONING IN PROGRAM ANALYSIS

A Dissertation Presented to The Academic Faculty

By Xin Zhang

In Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the School of Computer Science

Georgia Institute of Technology, December 2017

Copyright © Xin Zhang 2017

Approved by:

Dr. Mayur Naik, Advisor (Department of Computer and Information Science, University of Pennsylvania)
Dr. Hongseok Yang (Department of Computer Science, Oxford University)
Dr. Santosh Pande (School of Computer Science, Georgia Institute of Technology)
Dr. William Harris (School of Computer Science, Georgia Institute of Technology)
Dr. Aditya Nori (Microsoft Research)

Date Approved: August 23, 2017

"Knowledge comes, but wisdom lingers." (Alfred, Lord Tennyson)

To my parents.

ACKNOWLEDGEMENTS

I am forever in debt to my advisor, Mayur Naik, for his support and guidance throughout my Ph.D. study. It is Mayur who brought me to the wonderful world of research. As his first Ph.D. student, I have received an enormous amount of attention from Mayur that other students could only dream of. From topic selection to problem solving, formalization to empirical evaluation, writing to presentation, he has coached me heavily in every aspect that is needed to be a researcher. Mayur's passion for and high standards in research will always inspire me to be a better researcher.

Besides Mayur, I was fortunate to be mentored by Hongseok Yang and Aditya Nori. I worked with Hongseok closely in the first half of my Ph.D. study, and learnt the most about programming language theories from him. It always amazes me how Hongseok can draw principles and insights from raw and seemingly hacky ideas. I worked with Aditya closely in the second half of my Ph.D. study and benefited greatly from his crisp feedback. Although we only met for one hour a week, this one hour was often the hour in which I learnt the most of the whole week.

I would like to thank all my collaborators, but especially Radu Grigore, Ravi Mangal, and Xujie Si. They have helped greatly in projects that eventually led to this thesis. I also had a great time working with them. I am also grateful to the rest of my collaborators, including Sulekha Kulkarni, Aditya Kamath, Jongse Park, Hadi Esmaeilzadeh, Vasco Manquinho, Mikolas Janota, and Alexey Ignatiev.

Bill Harris and Santosh Pande were kind enough to serve on my thesis committee. They and other folks at Georgia Tech have made GT a wonderful place for graduate study.

Last but not least, I would like to thank my parents for the unconditional love and support I have received ever since I can remember. Throughout my life, I have always been encouraged by them to pursue things that I am passionate about. When facing challenges in life, I always gain strength knowing that they will be there for me.

TABLE OF CONTENTS

Acknowledgments
List of Tables
List of Figures
Chapter 1: Introduction
  1.1 Motivating Applications
    1.1.1 Automated Verification
    1.1.2 Interactive Verification
    1.1.3 Static Bug Detection
  1.2 System Architecture
  1.3 Thesis Contributions
  1.4 Thesis Organization
Chapter 2: Background
  2.1 Constraint-Based Program Analysis
  2.2 Datalog
  2.3 Markov Logic Networks
Chapter 3: Applications
  3.1 Automated Verification
    3.1.1 Introduction
    3.1.2 Overview
    3.1.3 Parametric Dataflow Analyses
      3.1.3.1 Abstractions and Queries
      3.1.3.2 Problem Statement
    3.1.4 Algorithm
      3.1.4.1 From Datalog Derivations to Hard Constraints
      3.1.4.2 The Algorithm
      3.1.4.3 Choosing Good Abstractions via Mixed Hard and Soft Constraints
    3.1.5 Empirical Evaluation
      3.1.5.1 Evaluation Setup
      3.1.5.2 Evaluation Results
    3.1.6 Related Work
    3.1.7 Conclusion
  3.2 Interactive Verification
    3.2.1 Introduction
    3.2.2 Overview
    3.2.3 The Optimum Root Set Problem
      3.2.3.1 Declarative Static Analysis
      3.2.3.2 Problem Statement
      3.2.3.3 Monotonicity
      3.2.3.4 NP-Completeness
    3.2.4 Interactive Analysis
      3.2.4.1 Main Algorithm
      3.2.4.2 Soundness
      3.2.4.3 Finding an Optimum Root Set
      3.2.4.4 From Augmented Datalog to Markov Logic Network
      3.2.4.5 Feasible Payoffs
      3.2.4.6 Discussion
    3.2.5 Instance Analyses
    3.2.6 Empirical Evaluation
      3.2.6.1 Evaluation Setup
      3.2.6.2 Evaluation Results
    3.2.7 Related Work
    3.2.8 Conclusion
  3.3 Static Bug Detection
    3.3.1 Introduction
    3.3.2 Overview
    3.3.3 Analysis Specification
    3.3.4 The EUGENE System
      3.3.4.1 Online Component of EUGENE: Inference
      3.3.4.2 Offline Component of EUGENE: Learning
    3.3.5 Empirical Evaluation
      3.3.5.1 Evaluation Setup
      3.3.5.2 Evaluation Results
    3.3.6 Related Work
  3.4 Conclusion
Chapter 4: Solver Techniques
  4.1 Iterative Lazy-Eager Grounding
    4.1.1 Introduction
    4.1.2 The IPR Algorithm
    4.1.3 Empirical Evaluation
    4.1.4 Related Work
    4.1.5 Conclusion
  4.2 Query-Guided Maximum Satisfiability
    4.2.1 Introduction
    4.2.2 Example
    4.2.3 The Q-MAXSAT Problem
    4.2.4 Solving a Q-MAXSAT Instance
      4.2.4.1 Implementing an Efficient CHECK Function
      4.2.4.2 Efficient Optimality Check via Distinct APPROX Functions
    4.2.5 Empirical Evaluation
      4.2.5.1 Evaluation Setup
      4.2.5.2 Evaluation Results
    4.2.6 Related Work
    4.2.7 Conclusion
Chapter 5: Future Directions
Chapter 6: Conclusion
Appendix A: Proofs
  A.1 Proofs for Results of Chapter 2
  A.2 Proofs of Results of Chapter 3.1
    A.2.1 Proofs of Theorems 4 and 5
    A.2.2 Proof of Theorem 6
  A.3 Proofs of Results of Chapter 3.2
  A.4 Proofs of Results of Chapter 4.1
Appendix B: Alternate Use Case of URSA: Combining Two Static Analyses
References

LIST OF TABLES

3.1 Markov Logic Network encodings of different program analysis applications.
3.2 Each iteration (run) eliminates a number of abstractions. Some are eliminated by analyzing the current Datalog run (within run); some are eliminated because derivations from the current run interact with derivations from previous runs (across runs).
3.3 Benchmark characteristics. All numbers are computed using a 0-CFA call-graph analysis.
3.4 Results showing statistics of queries, abstractions, and iterations of our approach (CURRENT) and the baseline approaches (BASELINE) on the pointer analysis.
3.5 Results showing statistics of queries, abstractions, and iterations of our approach (CURRENT) and the baseline approaches (BASELINE) on the typestate analysis.
3.6 Running time (in seconds) of the Datalog solver in each iteration.
3.7 Running time (in seconds) of the Markov Logic Network solver in each iteration.
3.8 Benchmark characteristics. Column |A| shows the numbers of alarms. Column |QU| shows the sizes of the universes of potential causes, where k stands for thousands. All the reported numbers except for |A| and |QU| are computed using…
Recommended publications
  • Learning Structured and Causal Probabilistic Models for Computational Science (UC Santa Cruz Electronic Theses and Dissertations)
    UC Santa Cruz Electronic Theses and Dissertations

    Title: Learning Structured and Causal Probabilistic Models for Computational Science
    Permalink: https://escholarship.org/uc/item/0xf1t2zr
    Author: Sridhar, Dhanya
    Publication Date: 2018
    License: https://creativecommons.org/licenses/by-nc/4.0/
    Peer reviewed | Thesis/dissertation

    UNIVERSITY OF CALIFORNIA SANTA CRUZ. LEARNING STRUCTURED AND CAUSAL PROBABILISTIC MODELS FOR COMPUTATIONAL SCIENCE. A dissertation submitted in partial satisfaction of the requirements for the degree of DOCTOR OF PHILOSOPHY in COMPUTER SCIENCE by Dhanya Sridhar, September 2018. The Dissertation of Dhanya Sridhar is approved: Lise Getoor (Chair), Marilyn Walker, Kristian Kersting; Lori Kletzer, Vice Provost and Dean of Graduate Studies. Copyright © by Dhanya Sridhar 2018.

    Table of Contents: List of Figures; List of Tables; Abstract; Dedication; Acknowledgments; 1 Introduction (1.1 Challenges in Computational Science; 1.2 Structured Probabilistic Approaches; 1.3 Contributions); 2 Preliminaries (2.1 Probabilistic Graphical Models; 2.1.1 Markov Random Fields; 2.1.2 Bayesian Networks; 2.1.3 Inference and Learning; 2.2 Statistical Relational Learning; 2.3 Hinge-loss Markov Random Fields and Probabilistic Soft Logic); 3 Modeling Online Debates (3.1 Debate Stance Classification; 3.2 Online Debate Forums; 3.3 Related Work; 3.4 Modeling Debate Stance; 3.5 PSL Models; 3.6 Cost-Penalized Learning; 3.7 Experimental Results; 3.7.1 Evaluating Modeling Choices…)
  • Automated Knowledge Base Extension Using Open Information
    AUTOMATED KNOWLEDGE BASE EXTENSION USING OPEN INFORMATION, by Arnab Kumar Dutta (Durgapur, India). Inaugural dissertation for the attainment of the academic degree of Doctor of Natural Sciences, Faculty of Business Informatics and Business Mathematics, University of Mannheim. Mannheim, Germany, 2015. Dean: Prof. Dr. Heinz Jürgen Müller, University of Mannheim, Germany. Referee: Prof. Dr. Heiner Stuckenschmidt, University of Mannheim, Germany. Co-referee: Ao. Prof. Dr. Fabian Suchanek, Télécom ParisTech University, France. Date of the oral examination: 4 February 2016.

    "Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work. And the only way to do great work is to love what you do. If you haven't found it yet, keep looking. Don't settle. As with all matters of the heart, you'll know when you find it. And, like any great relationship, it just gets better and better as the years roll on. So keep looking until you find it. Don't settle." (Steve Jobs, 1955-2011)

    ABSTRACT: Open Information Extraction (OIE) frameworks (like NELL and ReVerb) provide us with domain-independent facts in natural-language form, containing knowledge from varied sources. Extraction mechanisms for structured knowledge bases (KBs) (like DBpedia and YAGO) often fail to retrieve such facts due to their resource-specific extraction schemes. Hence, structured KBs can extend themselves by augmenting their coverage with the facts discovered by OIE systems. This possibility motivates us to integrate these two genres of extractions into one interactive framework. In this work, we present a complete, ontology-independent, generalized architecture for achieving this integration.
  • Unifying Logical and Statistical AI with Markov Logic
    Review articles. DOI: 10.1145/3241978. Markov logic can be used as a general framework for joining logical and statistical AI. By Pedro Domingos and Daniel Lowd.

    Unifying Logical and Statistical AI with Markov Logic

    For many years, the two dominant paradigms in artificial intelligence (AI) have been logical AI and statistical AI. Logical AI uses first-order logic and related representations to capture complex relationships and knowledge about the world. However, logic-based approaches are often too brittle to handle the uncertainty and noise present in many applications. Statistical AI uses probabilistic representations such as probabilistic graphical models to capture uncertainty. However, graphical models only represent distributions over propositional universes and must be customized to handle relational domains. As a result, expressing complex concepts and relationships in graphical models is often difficult and labor-intensive. To handle the complexity and uncertainty present in most real-world problems, we need AI that is both logical and statistical, integrating first-order logic and graphical models. One or the other…

    Key insights:
    • Intelligent systems must be able to handle the complexity and uncertainty of the real world. Markov logic enables this by unifying first-order logic and probabilistic graphical models into a single representation. Many deep architectures are instances of Markov logic.
    • An extensive suite of learning and inference algorithms for Markov logic has been developed, along with open source implementations like Alchemy.
    • Markov logic has been applied to natural language understanding, information extraction and integration, robotics, social network analysis, computational biology, and many other areas.
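    To make the weighted-formula semantics concrete, here is a minimal, self-contained Python sketch (the rule, weight, and domain are invented for illustration, not taken from the article): a Markov logic network assigns each possible world x a probability P(x) proportional to exp(sum_i w_i * n_i(x)), where n_i(x) counts the true groundings of formula i.

        import itertools
        import math

        # Toy Markov logic network with one weighted formula over one person:
        #   2.0 : Smokes(anna) => Cancer(anna)
        WEIGHT = 2.0

        def n_true_groundings(smokes: bool, cancer: bool) -> int:
            # The implication has a single grounding here; it is true unless
            # Smokes(anna) holds while Cancer(anna) does not.
            return int((not smokes) or cancer)

        # A possible world is a truth assignment to the two ground atoms.
        worlds = list(itertools.product([False, True], repeat=2))
        scores = {w: math.exp(WEIGHT * n_true_groundings(*w)) for w in worlds}
        Z = sum(scores.values())  # partition function

        for (smokes, cancer), score in sorted(scores.items()):
            print(f"Smokes={smokes!s:5} Cancer={cancer!s:5} P={score / Z:.3f}")

    Worlds that violate the rule get exponentially lower, but still nonzero, probability, which is how Markov logic softens hard logical constraints.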
  • Human in the Loop Enrichment of Product Graphs with Probabilistic Soft Logic
    Human in the Loop Enrichment of Product Graphs with Probabilistic Soft Logic. Marouene Sfar Gandoura, Zografoula Vagena, Nikolaos Vasiloglou (RelationalAI).

    ABSTRACT: Product graphs have emerged as a powerful tool for online retailers to enhance product semantic search, catalog navigation, and recommendations. Their versatility stems from the fact that they can uniformly store and represent different relationships between products, their attributes, concepts or abstractions, etc., in an actionable form. Such information may come from many, heterogeneous, disparate, and mostly unstructured data sources, rendering the product graph creation task a major undertaking. Our work complements existing efforts on product graph creation by enabling field experts to directly control the graph completion process. We focus on the subtask of enriching product graphs with product attributes, and we employ statistical relational learning coupled with a novel human-in-the-loop enhanced inference workflow based on Probabilistic Soft Logic (PSL) to reliably predict product-attribute relationships. Our preliminary experiments demonstrate the viability, practicality, and effectiveness of our approach and its competitiveness compared with alternative methods. As a by-product, our method generates probabilistic fact validity labels from an originally unlabeled database that can subsequently be leveraged by other graph completion methods.

    (From the introduction:) …on the internet in the form of product manuals, product reviews, blogs, social media sites, etc. To tackle data completeness issues and to also take advantage of the abundance of relevant information on the web, many retailers [4, 13] are currently looking into creating product graphs, i.e. knowledge graphs that capture all the relevant information about each of their products and related entities.
  • Hinge-Loss Markov Random Fields and Probabilistic Soft Logic
    Journal of Machine Learning Research 18 (2017) 1-67. Submitted 12/15; Revised 12/16; Published 10/17.

    Hinge-Loss Markov Random Fields and Probabilistic Soft Logic

    Stephen H. Bach (Computer Science Department, Stanford University), Matthias Broecheler (DataStax), Bert Huang (Computer Science Department, Virginia Tech), Lise Getoor (Computer Science Department, University of California, Santa Cruz). Editor: Luc De Raedt.

    Abstract: A fundamental challenge in developing high-impact machine learning technologies is balancing the need to model rich, structured domains with the ability to scale to big data. Many important problem areas are both richly structured and large scale, from social and biological networks, to knowledge graphs and the Web, to images, video, and natural language. In this paper, we introduce two new formalisms for modeling structured data, and show that they can both capture rich structure and scale to big data. The first, hinge-loss Markov random fields (HL-MRFs), is a new kind of probabilistic graphical model that generalizes different approaches to convex inference. We unite three approaches from the randomized algorithms, probabilistic graphical models, and fuzzy logic communities, showing that all three lead to the same inference objective. We then define HL-MRFs by generalizing this unified objective. The second new formalism, probabilistic soft logic (PSL), is a probabilistic programming language that makes HL-MRFs easy to define using a syntax based on first-order logic. We introduce an algorithm for inferring most-probable variable assignments (MAP inference) that is much more scalable than general-purpose convex optimization methods, because it uses message passing to take advantage of sparse dependency structures.
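    As a concrete illustration of what a hinge-loss potential looks like, the following short Python sketch (an invented toy rule, not an example from the paper) computes the weighted, optionally squared, distance to satisfaction of one ground PSL rule under the Lukasiewicz relaxation, where atoms take soft truth values in [0, 1].

        # Ground PSL-style rule (weight 10, squared potential), purely illustrative:
        #   Friends(a,b) & Votes(a,p) -> Votes(b,p)

        def luk_and(x: float, y: float) -> float:
            # Lukasiewicz conjunction of two soft truth values.
            return max(0.0, x + y - 1.0)

        def distance_to_satisfaction(body: float, head: float) -> float:
            # An implication is satisfied when the head is at least as true
            # as the body; the hinge measures how far it falls short.
            return max(0.0, body - head)

        friends, votes_a, votes_b = 0.9, 0.8, 0.3  # assumed soft truth values
        weight, squared = 10.0, True

        body = luk_and(friends, votes_a)
        dist = distance_to_satisfaction(body, votes_b)
        potential = weight * (dist ** 2 if squared else dist)
        print(f"body={body:.2f} distance={dist:.2f} weighted potential={potential:.2f}")

    MAP inference minimizes the sum of such convex potentials over all ground rules, which is why it reduces to the scalable convex optimization the abstract describes.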
  • Probabilistic Soft Logic (slides by Dr. Fayyaz ul Amir Afsar Minhas, PIEAS)
    Probabilistic Soft Logic. Dr. Fayyaz ul Amir Afsar Minhas, PIEAS Biomedical Informatics Research Lab, Department of Computer and Information Sciences, Pakistan Institute of Engineering & Applied Sciences, PO Nilore, Islamabad, Pakistan. http://faculty.pieas.edu.pk/fayyaz/

    Adapted and copied from: "Probabilistic Soft Logic: A New Framework for Statistical Relational Learning" (https://pdfs.semanticscholar.org/presentation/2cf1/22badd9491bbd9c1272e20c69cf9bca0af15.pdf); "Probabilistic Soft Logic: A Scalable Probabilistic Programming Language for Modeling Richly Structured Domains", Golnoosh Farnadi and Lise Getoor, LINQS Group, UCSC; "Hinge-Loss Markov Random Fields and Probabilistic Soft Logic" (JMLR 2017); "Probabilistic Soft Logic: A Short Introduction". CIS 622: Machine Learning in Bioinformatics.

    IID: Typical machine learning assumes that examples are IID.

    IID in the real world: A lot of real-world data are not IID. Data points often have relationships amongst them or model relationships between entities (structured data, network data, heterogeneous networks). Examples: social networks (FB), document nets (Wikipedia), biological networks, NLP data, video data. Most machine learning is "model independent" and lacks modeling of causal (cause-effect) relationships; see Judea Pearl, "Theoretical Impediments to Machine Learning with Seven Sparks from the Causal Revolution" and "The Book of Why". How can we model these relationships?
  • Collective Spammer Detection in Evolving Multi-Relational Social Networks
    Shobeir Fakhraei (University of Maryland, College Park, MD, USA), James Foulds (University of California, Santa Cruz, CA, USA), Madhusudana Shashanka (if(we) Inc., San Francisco, CA, USA), Lise Getoor (University of California, Santa Cruz, CA, USA). Conference paper, August 2015. DOI: 10.1145/2783258.2788606.

    ABSTRACT: Detecting unsolicited content and the spammers who create it is a long-standing challenge that affects all of us on a daily basis. The recent growth of richly-structured social networks has provided new challenges and opportunities in the spam detection landscape. Motivated by the Tagged.com social network, we develop methods to identify spammers in evolving multi-relational social networks. We model a social network as a time-stamped multi-relational graph where vertices represent users, and edges represent different activities between them. To identify spammer accounts, our approach makes use of structural features, sequence modelling, and collective reasoning. We leverage relational sequence information using k-gram features and probabilistic modelling with a mixture of Markov models.

    Keywords: Social Networks, Spam, Social Spam, Collective Classification, Graph Mining, Multi-relational Networks, Heterogeneous Networks, Sequence Mining, Tree-Augmented Naive Bayes, k-grams, Hinge-loss Markov Random Fields (HL-MRFs), Probabilistic Soft Logic (PSL), GraphLab.
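    The k-gram features mentioned above are easy to picture: a user's time-ordered activity sequence is summarized by counts of every window of k consecutive actions. A minimal Python sketch (the action names and data are invented for illustration, not taken from the paper):

        from collections import Counter

        def kgram_features(actions, k=2):
            """Count sliding windows of k consecutive actions."""
            grams = zip(*(actions[i:] for i in range(k)))
            return Counter(grams)

        # One user's activity stream (edge types over time), purely illustrative.
        actions = ["message", "message", "friend_request", "message",
                   "message", "friend_request", "message"]

        for gram, count in kgram_features(actions, k=2).items():
            print(gram, count)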
  • Decoupled Smoothing in Probabilistic Soft Logic
    Decoupled Smoothing in Probabilistic Soft Logic. Yatong Chen, Bryan Tor, Eriq Augustine, Lise Getoor (University of California, Santa Cruz, Santa Cruz, USA).

    ABSTRACT: Node classification in networks is a common graph mining task. In this paper, we examine how separating identity (a node's attribute) and preference (the kind of identities to which a node prefers to link) is useful for node classification in social networks. Building upon recent work by Chin et al. (2019), where the separation of identity and preference is accomplished through a technique called "decoupled smoothing", we show how models that characterize both identity and preference are able to capture the underlying structure in a network, leading to improved performance in node classification tasks. Specifically, we use probabilistic soft logic (PSL) [2], a flexible and declarative statistical reasoning framework, to model identity and preference.

    (From the introduction:) …This phenomenon, known as monophily, where nodes share similarity with their neighbors' neighbors, has also been shown to be useful in identifying unknown labels [1]. In this paper, we examine how the notions of identity and preference are useful for node classification in social networks. Identity refers to a specific attribute of an individual, such as their gender, political affiliation, religious belief, and so on; preference refers to the tendency of an individual to share a connection with others having a particular identity. The idea of decoupling identity and preference was first introduced by Chin et al. (2019). In their work, they make specific assumptions on the correlations between an individual's identity and preference: they assume that the identity…
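    A toy illustration of the monophily signal the excerpt defines (this is not the paper's decoupled-smoothing algorithm, just a sketch of the "neighbors' neighbors" idea it builds on): estimate a node's unknown label from the label distribution of its 2-hop neighborhood. The graph and labels below are invented.

        from collections import Counter

        # Tiny undirected friendship graph with some known labels.
        edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d"), ("d", "e")]
        labels = {"d": "red", "e": "blue"}  # node "a" is unlabeled

        neighbors = {}
        for u, v in edges:
            neighbors.setdefault(u, set()).add(v)
            neighbors.setdefault(v, set()).add(u)

        def two_hop_vote(node):
            # Collect known labels of neighbors-of-neighbors (excluding the node).
            votes = Counter(
                labels[w]
                for v in neighbors[node]
                for w in neighbors[v]
                if w != node and w in labels
            )
            return votes.most_common(1)[0][0] if votes else None

        print(two_hop_vote("a"))  # label suggested by a's neighbors' neighbors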
  • srlearn: A Python Library for Gradient-Boosted Statistical Relational Models (arXiv:1912.08198v1 [cs.LG], 17 Dec 2019)
    srlearn: A Python Library for Gradient-Boosted Statistical Relational Models. Alexander L. Hayes, ProHealth Lab, Indiana University Bloomington.

    Abstract: We present srlearn, a Python library for boosted statistical relational models. We adapt the scikit-learn interface to this setting and provide examples for how this can be used to express learning and inference problems.

    Introduction: Traditional machine learning systems have generally been built as command line applications or as graphical user interfaces (Hall et al. 2009). Both have advantages, but offer fewer solutions when data acquisition, preprocessing, and model development must occur together. Systems such as scikit-learn, TensorFlow, Pyro, and PyTorch solve this problem by embedding data cleaning and model development as steps within general-purpose languages (Pedregosa et al. …) …ready to predict (e.g. classify) new data instances. Although the estimation and prediction functions are logically distinguished in two separate protocols, it is generally a single class that implements both a learning algorithm and the model for applying the parameters to new data. The aforementioned standard approach thus comprises configuring model hyperparameters, fitting training data, and predicting test data. However, scikit-learn has since developed into a full-fledged ecosystem that also services related functions in the modeling workflow: model selection, hyperparameter tuning, and model validation. Furthermore, multiple offshoots of scikit-learn have emerged to tackle more specialized challenges, including imbalanced datasets (Lematre, Nogueira, and Aridas 2017), generalized linear models (Blondel and Pedregosa 2016), and metric learning (de Vazelhes et al. 2019).
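    The estimator protocol the paper builds on can be shown with scikit-learn itself. This is a generic sketch of the configure/fit/predict workflow, not srlearn's own API (whose class names differ and which fits relational databases rather than feature matrices):

        # The scikit-learn protocol: configure hyperparameters, fit, predict.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=200, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        clf = GradientBoostingClassifier(n_estimators=50)  # configure
        clf.fit(X_train, y_train)                          # learn parameters
        print(clf.predict(X_test[:5]))                     # classify new data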
  • Declarative Probabilistic Programming with Datalog
    Declarative Probabilistic Programming with Datalog. Vince Bárány (LogicBlox, Inc.), Balder ten Cate (LogicBlox, Inc.), Benny Kimelfeld (LogicBlox, Inc. and Technion, Israel Institute of Technology), Dan Olteanu (LogicBlox, Inc. and University of Oxford), Zografoula Vagena (LogicBlox, Inc.).

    Probabilistic programming languages are used for developing statistical models. They typically consist of two components: a specification of a stochastic process (the prior), and a specification of observations that restrict the probability space to a conditional subspace (the posterior). Use cases of such formalisms include the development of algorithms in machine learning and artificial intelligence. In this article we establish a probabilistic-programming extension of Datalog that, on the one hand, allows for defining a rich family of statistical models, and on the other hand retains the fundamental properties of declarativity. Our proposed extension provides mechanisms to include common numerical probability functions; in particular, conclusions of rules may contain values drawn from such functions. The semantics of a program is a probability distribution over the possible outcomes of the input database with respect to the program. Observations are naturally incorporated by means of integrity constraints over the extensional and intensional relations. The resulting semantics is robust under different chases and invariant to rewritings that preserve logical equivalence.

    CCS Concepts: Theory of computation → Constraint and logic programming; Database query languages (principles). Mathematics of computing → Probabilistic representations. General Terms: Languages, Theory. Additional Key Words and Phrases: Chase, Datalog, declarative, probability measure space, probabilistic programming.

    1. INTRODUCTION: Languages for specifying general statistical models are commonly used in the development of machine learning and artificial intelligence algorithms for tasks that involve inference under uncertainty.
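    The core idea, rule conclusions that draw values from numerical probability functions, can be mimicked in a few lines of Python. The toy program below is invented for illustration; it does not use the paper's actual syntax, and the paper's semantics is defined via the chase rather than naive sampling:

        import random

        # Toy probabilistic-Datalog-style program (illustrative syntax only):
        #   burglary(H, B) with B drawn from Flip(0.1), for every fact house(H)
        #   alarm(H) :- burglary(H, true)
        # Sampling yields one possible outcome of the input database.
        random.seed(0)
        houses = ["house1", "house2", "house3"]  # extensional facts house(H)

        def sample_outcome():
            facts = set()
            for h in houses:
                b = random.random() < 0.1       # value drawn from Flip(0.1)
                facts.add(("burglary", h, b))
                if b:
                    facts.add(("alarm", h))     # ordinary Datalog rule
            return facts

        print(sorted(sample_outcome()))

    The semantics of such a program is then a distribution over these outcome databases, with observations pruning the outcomes that violate integrity constraints.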
  • NIKHIL KINI ([email protected])
    NIKHIL KINI ([email protected]), (951) 823-3852, nikhilnkini.wordpress.com, github.com/nkini, linkedin.com/in/nikhil-kini-753b7751

    Main interests:
    ● Machine learning, Natural Language Processing, and Distributed Systems. I specialize in Entity Resolution, Probabilistic Graphical Models, and Statistical Relational Learning.
    Technical skills:
    ● Python, Java, JavaScript, the Numpy stack, scikit-learn, Maven, NLTK (Expert)
    ● C, C++, Matlab, PHP, Ruby, Apache Spark, Android SDK, Theano (Prior Experience)
    Education:
    ● Master's degree in Computer Science at UC Santa Cruz, 2017. GPA: 3.47.
    ● Bachelor of Engineering in Information Technology, Mumbai University, 2009.
    Work Experience:
    ● Over six years in research and development, of which three years in Machine Learning.
    Adobe Systems Inc., Data Scientist Intern (June 2016 - March 2017), 10 months:
    ● Visitor Stitching on Cross-Device Web Logs using Probabilistic Soft Logic
    ● Resolved cookies belonging to the same user, handling millions of sparsely labeled cookies
    ● Implemented baseline Support Vector Machine (SVM) models (in scikit-learn)
    ● Implemented relational classification models using Probabilistic Soft Logic
    ● Publication: Probabilistic Visitor Stitching on Cross-Device Web Logs, WWW 2017
    University of California at Santa Cruz, Graduate Student Researcher (September 2016 - March 2017):
    ● Research on transitivity in Hinge-loss Markov Random Fields
    ● Improving scalability for Entity Resolution in Probabilistic Soft Logic
    Teaching Assistant for CMPS 128 Distributed Systems (Spring 2016):
    ● Explores fundamental as well as emerging, practical as well as theoretical topics
    ● Co-designed and implemented the final project for the class, a distributed key-value store
    Teaching Assistant for CMPS 183 Web Applications (Winter 2016):
    ● Course teaches Web2py (Python) and software development using Scrum
    ● Responsibilities included mentoring 10 teams of 5 students each