Advancements in Graph Neural Networks: PGNNs, Pretraining, and OGB
Jure Leskovec


Advancements in Graph Neural Networks: PGNNs, Pretraining, and OGB. Jure Leskovec. Includes joint work with W. Hu, J. You, M. Fey, Y. Dong, B. Liu, M. Catasta, K. Xu, S. Jegelka, M. Zitnik, P. Liang, V. Pande.

Networks of Interactions
Social networks, knowledge graphs, biological networks, complex systems, molecules, code.

Representation Learning in Graphs
Input: a network. Predictions: node labels, link prediction, graph classification.

Why is it Hard?
Networks are complex!
§ Arbitrary size and complex topological structure (i.e., no spatial locality like grids, in contrast to text and images)
§ No fixed node ordering or reference point
§ Often dynamic, with multimodal features

Graph Neural Networks
Each node defines a computation graph over the input graph; each edge in this computation graph is a transformation/aggregation function.
Scarselli et al. 2005. The Graph Neural Network Model. IEEE Transactions on Neural Networks.

Intuition: nodes aggregate information from their neighbors using neural networks.
Inductive Representation Learning on Large Graphs. W. Hamilton, R. Ying, J. Leskovec. NIPS 2017.

Idea: Aggregate Neighbors
The network neighborhood defines a computation graph: every node defines a computation graph based on its neighborhood. This can be viewed as learning a generic linear combination of graph low-pass and high-pass operators [Bronstein et al., 2017].

[NIPS '17] Our Approach: GraphSAGE
Update for node $v$:
$$h_v^{(l+1)} = \sigma\!\left(W_l \cdot \big[\, h_v^{(l)},\ \mathrm{AGG}\big(\{\sigma(Q_l h_u^{(l)}) : u \in N(v)\}\big) \big]\right)$$
The first argument transforms $v$'s own level-$l$ embedding; the second transforms and aggregates the level-$l$ embeddings of $v$'s neighbors. Here $h_v^{(0)}$ is the feature vector $x_v$ of node $v$, and $\sigma(\cdot)$ is a sigmoid activation function.

[NIPS '17] GraphSAGE: Training
§ Aggregation parameters are shared across all nodes (e.g., the compute graphs for nodes A and B share the same $W_l$ and $Q_l$)
§ The number of model parameters is independent of $|V|$
§ Different loss functions can be used:
§ Classification/regression: $\mathcal{L}(h_u) = \lVert y_u - f(h_u) \rVert^2$
§ Pairwise loss: $\mathcal{L}(h_u, h_v) = \max(0,\, 1 - \mathrm{dist}(h_u, h_v))$

Inductive Capability
Train on a snapshot of the graph; when a new node arrives, generate an embedding for it directly, even for nodes we never trained on!
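To make the update rule above concrete, here is a minimal PyTorch sketch of a single GraphSAGE layer, assuming mean aggregation over a dense adjacency matrix; the class and variable names are illustrative, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class GraphSAGELayer(nn.Module):
    """One level of the update h_v <- sigma(W · [h_v, AGG(sigma(Q h_u))])."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(2 * in_dim, out_dim)  # acts on [self, aggregated neighbors]
        self.Q = nn.Linear(in_dim, in_dim)       # transforms each neighbor embedding

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (num_nodes, in_dim) level-l embeddings; adj: (num_nodes, num_nodes) 0/1 matrix
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh = adj @ torch.sigmoid(self.Q(h)) / deg          # mean-aggregate neighbors
        return torch.sigmoid(self.W(torch.cat([h, neigh], dim=-1)))

# Because W and Q are shared by all nodes, the parameter count is independent of |V|,
# which is exactly what makes the model inductive: unseen nodes reuse the same weights.
```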
[NeurIPS '18] DIFFPOOL: Pooling for GNNs
Don't just embed individual nodes; embed the entire graph. Problem: learn how to hierarchically pool the nodes so as to embed the entire graph. Our solution, DIFFPOOL:
§ Learns a hierarchical pooling strategy
§ Sets of nodes are pooled hierarchically
§ Soft assignment of nodes to next-level nodes
Hierarchical Graph Representation Learning with Differentiable Pooling. R. Ying, et al. NeurIPS 2018.

How Expressive are GNNs?
Theoretical framework for characterizing a GNN's discriminative power:
§ Characterize an upper bound on the discriminative power of GNNs
§ Propose a maximally powerful GNN
§ Characterize the discriminative power of popular GNNs
A GNN aggregates over a multiset $X$ of neighbor features.
How Powerful are Graph Neural Networks? K. Xu, et al. ICLR 2019.

Key Insight: Rooted Subtrees
Assume no node features. Then single nodes cannot be distinguished, but rooted trees can be: the most powerful GNN is able to distinguish rooted subtrees of different structure. For example, a GNN can tell apart two nodes whose rooted subtrees differ, but not two nodes whose rooted subtrees are isomorphic.

Discriminative Power of GNNs
Idea: if the GNN aggregation is injective, then different multisets are distinguished and the GNN can capture subtree structure.
Theorem: any injective multiset function $f$ can be written as
$$f(X) = \phi\Big(\sum_{x \in X} \psi(x)\Big),$$
where the functions $\phi, \psi$ exist by the universal approximation theorem.
Consequence: the sum aggregator is the most expressive!

Ranking Aggregators by Expressive Power
Sum > mean > max over a multiset: the sum captures the full multiset, the mean captures the proportion/distribution of element types, and the max ignores multiplicities, reducing the multiset to a simple set (Figure 2 of Xu et al.). Figure 3 of Xu et al. shows simple graph structures that mean- and max-pooling aggregators fail to distinguish: (a) mean and max both fail, (b) max fails, (c) mean and max both fail.
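This ranking can be checked directly. Below is a small, self-contained demo of a Figure 3(c)-style failure case (the feature vectors are made up for illustration): two different multisets of node features that mean- and max-pooling collapse to the same vector, while the sum keeps them apart.

```python
import torch

red, green = torch.tensor([1.0, 0.0]), torch.tensor([0.0, 1.0])
X1 = torch.stack([red, green])               # multiset {r, g}
X2 = torch.stack([red, red, green, green])   # multiset {r, r, g, g}

aggregators = {
    "sum": lambda X: X.sum(dim=0),
    "mean": lambda X: X.mean(dim=0),
    "max": lambda X: X.max(dim=0).values,
}
for name, agg in aggregators.items():
    print(name, "distinguishes:", not torch.equal(agg(X1), agg(X2)))
# sum distinguishes: True. Mean and max map X1 and X2 to the same vector,
# because the proportions and the elementwise maxima are identical.
```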
Existing GNNs often instead use a 1-layer perceptron $\sigma \circ W$ (Duvenaud et al., 2015; Kipf & Welling, 2017; Zhang et al., 2018), a linear mapping followed by a non-linear activation function such as a ReLU. Such 1-layer mappings are examples of Generalized Linear Models (Nelder & Wedderburn, 1972), so a natural question is whether 1-layer perceptrons are enough for graph learning. Lemma 7 shows that there are indeed network neighborhoods (multisets) that models with 1-layer perceptrons can never distinguish.

Lemma 7. There exist finite multisets $X_1 \neq X_2$ such that for any linear mapping $W$,
$$\sum_{x \in X_1} \mathrm{ReLU}(Wx) = \sum_{x \in X_2} \mathrm{ReLU}(Wx).$$

The main idea of the proof of Lemma 7 is that 1-layer perceptrons can behave much like linear mappings, so the GNN layers degenerate into simply summing over neighborhood features. The proof builds on the fact that the bias term is lacking in the linear mapping. With a bias term and sufficiently large output dimensionality, 1-layer perceptrons might be able to distinguish different multisets. Nonetheless, unlike models using MLPs, the 1-layer perceptron (even with a bias term) is not a universal approximator of multiset functions. Consequently, even if GNNs with 1-layer perceptrons can embed different graphs to different locations to some degree, such embeddings may not adequately capture structural similarity, and can be difficult for simple classifiers, e.g., linear classifiers, to fit. Empirically, GNNs with 1-layer perceptrons, when applied to graph classification, sometimes severely underfit training data and often underperform GNNs with MLPs in terms of test accuracy.

Structures that Confuse Mean- and Max-Pooling
What happens if we replace the sum in $h(X) = \sum_{x \in X} f(x)$ with mean- or max-pooling, as in GCN and GraphSAGE? Mean and max-pooling aggregators are still well-defined multiset functions because they are permutation invariant, but they are not injective. In the failure-case examples above, node colors denote different node features, and we assume the GNN aggregates neighbors first before combining them with the central node. In the case where every node has the same feature $a$, $f(a)$ is the same across all nodes (for any function $f$); when performing neighborhood aggregation, the mean or maximum over $f(a)$ remains $f(a)$, and, by induction, we always obtain the same node representation everywhere.
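Putting the two ingredients together (sum aggregation plus an MLP rather than a 1-layer perceptron) yields a maximally powerful architecture in the sense above. The sketch below follows the GIN update from Xu et al., $h_v \leftarrow \mathrm{MLP}\big((1+\epsilon) h_v + \sum_{u \in N(v)} h_u\big)$, again using a dense adjacency matrix for brevity; it is a minimal illustration, not the paper's released code.

```python
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    """Injective-style update f(X) = phi(sum_x psi(x)), with phi an MLP."""

    def __init__(self, dim: int):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))  # learnable weight on the center node
        self.mlp = nn.Sequential(                # an MLP, not a 1-layer perceptron:
            nn.Linear(dim, dim), nn.ReLU(),      # Lemma 7 rules out sigma(Wx) alone
            nn.Linear(dim, dim),
        )

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (num_nodes, dim) node features; adj: (num_nodes, num_nodes), no self-loops
        return self.mlp((1 + self.eps) * h + adj @ h)  # sum over neighbors, then MLP
```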
Recommended publications
  • NBER Working Paper Series: Human Decisions and Machine Predictions
    NBER WORKING PAPER SERIES HUMAN DECISIONS AND MACHINE PREDICTIONS Jon Kleinberg Himabindu Lakkaraju Jure Leskovec Jens Ludwig Sendhil Mullainathan Working Paper 23180 http://www.nber.org/papers/w23180 NATIONAL BUREAU OF ECONOMIC RESEARCH 1050 Massachusetts Avenue Cambridge, MA 02138 February 2017 We are immensely grateful to Mike Riley for meticulously and tirelessly spearheading the data analytics, with effort well above and beyond the call of duty. Thanks to David Abrams, Matt Alsdorf, Molly Cohen, Alexander Crohn, Gretchen Ruth Cusick, Tim Dierks, John Donohue, Mark DuPont, Meg Egan, Elizabeth Glazer, Judge Joan Gottschall, Nathan Hess, Karen Kane, Leslie Kellam, Angela LaScala-Gruenewald, Charles Loeffler, Anne Milgram, Lauren Raphael, Chris Rohlfs, Dan Rosenbaum, Terry Salo, Andrei Shleifer, Aaron Sojourner, James Sowerby, Cass Sunstein, Michele Sviridoff, Emily Turner, and Judge John Wasilewski for valuable assistance and comments, to Binta Diop, Nathan Hess, and Robert Webber for help with the data, to David Welgus and Rebecca Wei for outstanding work on the data analysis, to seminar participants at Berkeley, Carnegie Mellon, Harvard, Michigan, the National Bureau of Economic Research, New York University, Northwestern, Stanford and the University of Chicago for helpful comments, to the Simons Foundation for its support of Jon Kleinberg's research, to the Stanford Data Science Initiative for its support of Jure Leskovec's research, to the Robert Bosch Stanford Graduate Fellowship for its support of Himabindu Lakkaraju and to Tom Dunn, Ira Handler, and the MacArthur, McCormick and Pritzker foundations for their support of the University of Chicago Crime Lab and Urban Labs. The main data we analyze are provided by the New York State Division of Criminal Justice Services (DCJS), and the Office of Court Administration.
  • Himabindu Lakkaraju
    Himabindu Lakkaraju. Contact Information: 428 Morgan Hall, Harvard Business School, Soldiers Field Road, Boston, MA 02163. E-mail: [email protected]. Webpage: http://web.stanford.edu/~himalv
    Research Interests: Transparency, Fairness, and Safety in Artificial Intelligence (AI); Applications of AI to Criminal Justice, Healthcare, Public Policy, and Education; AI for Decision-Making.
    Academic & Professional Experience: Harvard University, 11/2018 - Present, Postdoctoral Fellow with appointments in the Business School and the Department of Computer Science. Microsoft Research, Redmond, 5/2017 - 6/2017, Visiting Researcher. Microsoft Research, Redmond, 6/2016 - 9/2016, Research Intern. University of Chicago, 6/2014 - 8/2014, Data Science for Social Good Fellow. IBM Research - India, Bangalore, 7/2010 - 7/2012, Technical Staff Member. SAP Research, Bangalore, 7/2009 - 3/2010, Visiting Researcher. Adobe Systems Pvt. Ltd., Bangalore, 7/2007 - 7/2008, Software Engineer.
    Education: Stanford University, 9/2012 - 9/2018, Ph.D. in Computer Science. Thesis: Enabling Machine Learning for High-Stakes Decision-Making. Advisor: Prof. Jure Leskovec. Thesis Committee: Prof. Emma Brunskill, Dr. Eric Horvitz, Prof. Jon Kleinberg, Prof. Percy Liang, Prof. Cynthia Rudin. Stanford University, 9/2012 - 9/2015, Master of Science (MS) in Computer Science. Advisor: Prof. Jure Leskovec. Indian Institute of Science (IISc), 8/2008 - 7/2010, Master of Engineering (MEng) in Computer Science & Automation. Thesis: Exploring Topic Models for Understanding Sentiments Expressed in Customer Reviews. Advisor: Prof. Chiranjib
  • Human Decisions and Machine Predictions*
    HUMAN DECISIONS AND MACHINE PREDICTIONS* Jon Kleinberg Himabindu Lakkaraju Jure Leskovec Jens Ludwig Sendhil Mullainathan August 11, 2017 Abstract Can machine learning improve human decision making? Bail decisions provide a good test case. Millions of times each year, judges make jail-or-release decisions that hinge on a prediction of what a defendant would do if released. The concreteness of the prediction task combined with the volume of data available makes this a promising machine-learning application. Yet comparing the algorithm to judges proves complicated. First, the available data are generated by prior judge decisions. We only observe crime outcomes for released defendants, not for those whom judges detained. This makes it hard to evaluate counterfactual decision rules based on algorithmic predictions. Second, judges may have a broader set of preferences than the variable the algorithm predicts; for instance, judges may care specifically about violent crimes or about racial inequities. We deal with these problems using different econometric strategies, such as quasi-random assignment of cases to judges. Even accounting for these concerns, our results suggest potentially large welfare gains: one policy simulation shows crime reductions up to 24.7% with no change in jailing rates, or jailing rate reductions up to 41.9% with no increase in crime rates. Moreover, all categories of crime, including violent crimes, show reductions; and these gains can be achieved while simultaneously reducing racial disparities. These results suggest that while machine learning can be valuable, realizing this value requires integrating these tools into an economic framework: being clear about the link between predictions and decisions; specifying the scope of payoff functions; and constructing unbiased decision counterfactuals.
  • Graph Evolution: Densification and Shrinking Diameters
    Graph Evolution: Densification and Shrinking Diameters. JURE LESKOVEC, Carnegie Mellon University; JON KLEINBERG, Cornell University; CHRISTOS FALOUTSOS, Carnegie Mellon University. How do real graphs evolve over time? What are normal growth patterns in social, technological, and information networks? Many studies have discovered patterns in static graphs, identifying properties in a single snapshot of a large network or in a very small number of snapshots; these include heavy tails for in- and out-degree distributions, communities, small-world phenomena, and others. However, given the lack of information about network evolution over long periods, it has been hard to convert these findings into statements about trends over time. Here we study a wide range of real graphs, and we observe some surprising phenomena. First, most of these graphs densify over time, with the number of edges growing superlinearly in the number of nodes. Second, the average distance between nodes often shrinks over time, in contrast to the conventional wisdom that such distance parameters should increase slowly as a function of the number of nodes (like O(log n) or O(log log n)). Existing graph generation models do not exhibit these types of behavior even at a qualitative level. We provide a new graph generator, based on a forest fire spreading process, that has a simple, intuitive justification, requires very few parameters (like the flammability of nodes), and produces graphs exhibiting the full range of properties observed both in prior work and in the present study. This material is based on work supported by the National Science Foundation under Grants No. IIS-0209107, SENSOR-0329549, EF-0331657, IIS-0326322, IIS-0534205, CCF-0325453, IIS-0329064, CNS-0403340, CCR-0122581; a David and Lucile Packard Foundation Fellowship; and also by the Pennsylvania Infrastructure Technology Alliance (PITA), a partnership of Carnegie Mellon, Lehigh University and the Commonwealth of Pennsylvania's Department of Community and Economic Development (DCED).
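    As a worked example of the densification law e(t) ∝ n(t)^a, the exponent a is simply the slope of a log-log fit across graph snapshots. The snapshot counts below are made up for illustration; a real analysis would use the paper's graph time series.

```python
import numpy as np

# Hypothetical (nodes, edges) snapshots of a growing graph.
n = np.array([1_000, 2_000, 4_000, 8_000, 16_000])
e = np.array([5_000, 15_000, 45_000, 135_000, 405_000])

a, log_c = np.polyfit(np.log(n), np.log(e), 1)  # slope = densification exponent
print(f"a ~= {a:.2f}")  # ~1.58 here: a > 1 is superlinear, so average degree grows
```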
  • Curriculum Vitae: Jure Leskovec
    Curriculum Vitae: Jure Leskovec. Department of Computer Science, Stanford University, 353 Serra Mall, Stanford, CA 94305. Phone: (650) 723-2273. Cell: (412) 478-8329. E-mail: [email protected]. http://www.cs.stanford.edu/~jure
    Positions held: Stanford University, Stanford, CA. Assistant Professor, Department of Computer Science, since August 2009. Cornell University, Ithaca, NY. Postdoctoral researcher, Department of Computer Science, September 2008 - July 2009.
    Education: Carnegie Mellon University, Pittsburgh, PA. Ph.D. in Computational and Statistical Learning, Machine Learning Department, School of Computer Science, September 2008. University of Ljubljana, Slovenia. Diploma [B.Sc.] in Computer Science (Summa Cum Laude), May 2004.
    Research interests: Applied machine learning and large-scale data mining, focusing on the analysis and modeling of large real-world networks.
    Academic honors: ACM SIGKDD Dissertation Award 2009. Best Research Paper Award at ASCE Journal of Water Resources Planning and Management Engineering, 2009. Best student paper award at the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2007. Best solution of the Battle of the Water Sensor Networks (BWSN) competition on sensor placement in water distribution networks, 2007. Microsoft Graduate Research Fellowship, 2006-2008. Best research paper award at the 11th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2005. Slovenian Academy of Arts and Sciences Fellowship, 2004. University of Ljubljana Prešeren Award (highest university award for students, given to 10 out of 40,000 students), 2004. Winner of the KDD Cup 2003 on estimating the download rate of scientific articles on arXiv. KDD Cup is a data mining competition held in conjunction with the annual ACM SIGKDD Conference.
  • Jure Leskovec
    Jure Leskovec, Computer Science Department, Stanford University. Includes joint work with Deepayan Chakrabarti, Anirban Dasgupta, Christos Faloutsos, Jon Kleinberg, Kevin Lang and Michael Mahoney. Workshop on Complex Networks in Information & Knowledge Management, CIKM 2009.
    Interaction graph model of networks: nodes represent "entities"; edges represent "interactions" between pairs of entities. Thinking of data in the form of a network makes the difference: Google realized web-pages are connected; collective classification, targeting.
    Examples: online social networks, collaboration networks (arXiv, DBLP), systems biology networks, communication networks, web graph & citation networks, the Internet.
    Complex networks as phenomena, not just designed artifacts. What are the common patterns that emerge? Flickr social network (n = 584,207, m = 3,555,115): a scale-free network, with many more hubs than expected.
    Network data spans many orders of magnitude: a 436-node network of email exchange over 3 months at a corporate research lab [Adamic-Adar, SocNets '03]; a 43,553-node network of email exchange over 2 years at a large university [Kossinets-Watts, Science '06]; a 4.4-million-node network of declared friendships on a blogging community [Backstrom et al., KDD '06]; a 240-million-node network of all IM communication over a month on Microsoft Instant Messenger [Leskovec-Horvitz, WWW '08].
    [Leskovec et al. KDD '05] How do network properties scale with the size of the network? Densification: the number of edges grows as E(t) ∝ N(t)^a with a = 1.6, so the average degree increases; and the diameter shrinks over time, so path lengths get shorter. What do we hope to achieve by analyzing networks?
  • Real Time Traffic Congestion Prediction and Mitigation at the City Scale
    Real Time Traffic Congestion Prediction and Mitigation at the City Scale. John Shen, Carnegie Mellon University, https://orcid.org/0000-0003-4588-2775. ABHINAV JAUHRI, Carnegie Mellon, United States, https://orcid.org/0000-0002-9695-9261; BRAD STOCKS, Carnegie Mellon, United States; JIAN HUI LI, Intel Corp., United States; KOICHI YAMADA, Intel Corp., United States; JOHN PAUL SHEN, Carnegie Mellon, United States. FINAL RESEARCH REPORT, Contract # 69A3551747111. DISCLAIMER: The contents of this report reflect the views of the authors, who are responsible for the facts and the accuracy of the information presented herein. This document is disseminated under the sponsorship of the U.S. Department of Transportation's University Transportation Centers Program, in the interest of information exchange. The U.S. Government assumes no liability for the contents or use thereof.
    Generating Realistic Ride-Hailing Data Sets Using GANs. ABHINAV JAUHRI, BRAD STOCKS, JIAN HUI LI, KOICHI YAMADA, JOHN PAUL SHEN. This paper focuses on the synthetic generation of human mobility data in urban areas. We present a novel application of Generative Adversarial Networks (GANs) for modeling and generating human mobility data. We leverage actual ride requests from ride sharing/hailing services from four major cities to train our GAN model. Our model captures the spatial and temporal variability of the ride-request patterns observed for all four cities over a typical week. Previous works have characterized the spatial and temporal properties of human mobility data sets using the fractal dimensionality and the densification power law, respectively, which we utilize to validate our GAN-generated synthetic data sets.
  • Graphs Over Time: Densification Laws, Shrinking Diameters and Possible Explanations
    Graphs over Time: Densification Laws, Shrinking Diameters and Possible Explanations. Jure Leskovec, Carnegie Mellon University, [email protected]; Jon Kleinberg, Cornell University, [email protected]; Christos Faloutsos, Carnegie Mellon University, [email protected].
    ABSTRACT: How do real graphs evolve over time? What are "normal" growth patterns in social, technological, and information networks? Many studies have discovered patterns in static graphs, identifying properties in a single snapshot of a large network, or in a very small number of snapshots; these include heavy tails for in- and out-degree distributions, communities, small-world phenomena, and others. However, given the lack of information about network evolution over long periods, it has been hard to convert these findings into statements about trends over time. Here we study a wide range of real graphs, and we observe some surprising phenomena. First, most of these graphs densify over time, with the number of edges growing superlinearly in the number of nodes. Second, the average distance between nodes often shrinks over time, in contrast to the conventional wisdom that such distance parameters should increase slowly as a function of the number of nodes (like O(log n) or O(log log n)). We provide a new graph generator, based on a forest fire spreading process, that produces graphs exhibiting the full range of properties observed both in prior work and in the present study.
    Categories and Subject Descriptors: H.2.8 [Database Management]: Database Applications - Data Mining. General Terms: Measurement, Theory. Keywords: densification power laws, graph generators, graph mining, heavy-tailed distributions, small-world phenomena.
    1. INTRODUCTION: In recent years, there has been considerable interest in graph structures arising in technological, sociological, and scientific settings: computer networks (routers or autonomous
  • Integrating Transductive and Inductive Embeddings Improves Link Prediction Accuracy
    Integrating Transductive and Inductive Embeddings Improves Link Prediction Accuracy. Chitrank Gupta, Yash Jain, Abir De, Soumen Chakrabarti. IIT Bombay, India.
    ABSTRACT: In recent years, inductive graph embedding models, viz., graph neural networks (GNNs), have become increasingly accurate at link prediction (LP) in online social networks. The performance of such networks depends strongly on the input node features, which vary across networks and applications. Selecting appropriate node features remains application-dependent and generally an open question. Moreover, owing to privacy and ethical issues, use of personalized node features is often restricted. In fact, many publicly available data from online social networks do not contain any node features (e.g., demography). In this work, we provide a comprehensive experimental analysis which shows that harnessing a transductive technique (e.g., Node2Vec) for obtaining initial node representations, after which an inductive node embedding technique takes over, leads to substantial improvements in link prediction accuracy.
    1.1 Prior work and limitations: ... network science and machine learning [8-11, 20, 26, 29], starting from the seminal papers by Adamic and Adar [2] and Liben-Nowell and Kleinberg [15]. In recent years, there has been a flurry of deep learning models which have shown great potential in predicting links in online social networks. Such models learn node embedding vectors which compress sparse, high dimensional information about node neighborhoods in the graph into low dimensional dense vectors. These embedding methods predominantly follow two approaches. The first approach consists of transductive neural models which learn the embedding of each node separately by leveraging matrix factorization based models, e.g., Node2Vec [7], DeepWalk [18], etc. Such models suffer from two major limitations, viz., they cannot
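    A minimal sketch of the pipeline the abstract describes, with the transductive step stubbed out: a real run would plug Node2Vec or DeepWalk embeddings in where `transductive_embeddings` is called. All names here are illustrative, not the paper's code.

```python
import torch
import torch.nn as nn

def transductive_embeddings(num_nodes: int, dim: int) -> torch.Tensor:
    # Stand-in for Node2Vec/DeepWalk: in practice, learned from random walks.
    return torch.randn(num_nodes, dim)

class InductiveLP(nn.Module):
    """One GNN layer over transductive input features, plus a dot-product LP score."""

    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(2 * dim, dim)

    def forward(self, z: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh = adj @ z / deg                                   # mean-aggregate neighbors
        return torch.relu(self.lin(torch.cat([z, neigh], -1)))

    @staticmethod
    def score(h: torch.Tensor, u: int, v: int) -> torch.Tensor:
        return (h[u] * h[v]).sum()  # higher score = more likely edge (u, v)
```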
  • Learning Cost-Effective and Interpretable Treatment Regimes
    Learning Cost-Effective and Interpretable Treatment Regimes. Himabindu Lakkaraju, Stanford University; Cynthia Rudin, Duke University.
    Abstract: Decision makers, such as doctors and judges, make crucial decisions such as recommending treatments to patients, and granting bail to defendants on a daily basis. Such decisions typically involve weighing the potential benefits of taking an action against the costs involved (e.g., side-effects of treatments, monetary costs of medical tests etc.). For instance, a doctor first diagnoses the patient's condition by studying the patient's medical history and ordering a set of relevant tests that are crucial. In this work, we aim to automate this task of learning cost-effective, interpretable and actionable treatment regimes. We formulate this as a problem of learning a decision list, a sequence of if-then-else rules, that maps characteristics of subjects (e.g., diagnostic test results of patients) to treatments. This yields an end-to-end individualized policy for tests and treatments. We propose a novel objective to construct a decision list which maximizes outcomes for the population, and minimizes overall costs.
    Figure 1: Regime for treatment recommendations for asthma patients output by our framework. Q refers to milder forms of treatment used for quick relief, and C corresponds to more intense treatments such as controller drugs (C is higher cost than Q); attributes in blue are least expensive.
    If Spiro-Test=Pos and Prev-Asthma=Yes and Cough=High then C
    Else if Spiro-Test=Pos and Prev-Asthma=No then Q
    Else if Short-Breath=Yes and Gender=F and Age≥40 and Prev-Asthma=Yes then C
    Else if Peak-Flow=Yes and Prev-RespIssue=No and Wheezing=Yes then Q
    Else if Chest-Pain=Yes and Prev-RespIssue=Yes and Methacholine=Pos then C
    Else Q
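    The decision list in Figure 1 is straightforward to execute: scan the rules top to bottom and return the treatment of the first rule whose condition fires. A sketch follows, with attribute names taken from the figure and patients encoded as plain dicts purely for illustration.

```python
def recommend(p: dict) -> str:
    """First-match evaluation of the asthma regime from Figure 1."""
    if p["spiro_test"] == "pos" and p["prev_asthma"] and p["cough"] == "high":
        return "C"
    if p["spiro_test"] == "pos" and not p["prev_asthma"]:
        return "Q"
    if p["short_breath"] and p["gender"] == "F" and p["age"] >= 40 and p["prev_asthma"]:
        return "C"
    if p["peak_flow"] and not p["prev_resp_issue"] and p["wheezing"]:
        return "Q"
    if p["chest_pain"] and p["prev_resp_issue"] and p["methacholine"] == "pos":
        return "C"
    return "Q"  # default: the milder quick-relief treatment

patient = {"spiro_test": "pos", "prev_asthma": True, "cough": "high",
           "short_breath": False, "gender": "F", "age": 51, "peak_flow": False,
           "prev_resp_issue": True, "wheezing": True, "chest_pain": False,
           "methacholine": "neg"}
print(recommend(patient))  # -> "C" (first rule fires)
```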
  • Curriculum Vitae
    Curriculum Vitae: Marinka Zitnik. Department of Biomedical Informatics, Harvard University, 10 Shattuck Street, Boston, MA 02115, USA. [email protected], +1 (650)-308-6763, http://scholar.harvard.edu/marinka
    Research Interests: Machine learning and large-scale data science, focusing on new methods for the analysis, fusion, and learning on large graphs with applications to biology, genomics, and medicine.
    Current Position: Harvard University, USA, December 2019 - present. Assistant Professor, Department of Biomedical Informatics; Associate Member, Broad Institute of MIT and Harvard.
    Education: Stanford University, USA, 2016-2019. Postdoctoral Research Scholar, Department of Computer Science. Advisor: Prof. Jure Leskovec. Biohub Postdoctoral Research Fellow. University of Ljubljana, Slovenia, 2012-2015. Ph.D. in Computer Science, Department of Computer and Information Science. Advisor: Prof. Blaz Zupan. Committee: Peter Semrl, Igor Kononenko, Saso Dzeroski, Florian Markowetz. Summa cum laude (GPA 10.00/10). Jozef Stefan Golden Emblem. Nominated for the Best European Doctoral Dissertation in Artificial Intelligence. Stanford University, Department of Computer Science, USA, 2014. Research Fellow with Prof. Jure Leskovec. Baylor College of Medicine, Department of Molecular and Human Genetics, USA, 2013-2014. Predoctoral Fellow with Prof. Gad Shaulsky and Prof. Adam Kuspa. Imperial College London, Department of Computing, UK, 2012. Research Student with Prof. Natasa Przulj. University of Toronto, Donnelly Centre for Cellular and Biomolecular Research, Canada, 2012. Research Student with
  • Lecture 8: Evolution (1/1)
    Lecture 8: Evolution (1/1). How does time modify large networks? (i.e., how will FB look in 10 years?) COMS 4995-1: Introduction to Social Networks. Thursday, September 22nd.
    Outline: * Why is that important? * What could we expect from models so far? * What is empirically observed? o Densification, shrinking diameter * How can we explain and understand it?
    Motivation: * So far, we have only analyzed static graphs o But many social properties are dynamic o One may think that social graphs are the results of dynamics going on (see part II of the lecture) * Understanding evolution has direct implications o Are graph generators/models accurate? o Extrapolation (what if the network grows?) o Anomaly detection (see part III of the lecture)
    Previous models: * Arrival of new nodes: o Users joining Facebook, G+, etc. from invitation o Citation/collaboration: new paper/movie/webpage o Communication networks: new routers, new subscribers * These nodes connect to the current graph: o Creating new connections, citations, links, etc. * Let the process run n steps, look at the final result!
    Previous assumptions: * Degree = constant or slowly varying with size o Typically fixed using the original graph to model o Assume that even if the population grows large, each node remains with a finite local neighborhood * Distance = slowly growing with size o With constant degree, should be ~ log(N) o Explains the "small-world" phenomenon
    Example 1: Unif.