Relational Machine Learning Algorithms

by Alireza Samadianzakaria
Bachelor of Science, Sharif University of Technology, 2016

Submitted to the Graduate Faculty of the Department of Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy, University of Pittsburgh, 2021.

UNIVERSITY OF PITTSBURGH
DEPARTMENT OF COMPUTER SCIENCE

This dissertation was presented by Alireza Samadianzakaria. It was defended on July 7, 2021 and approved by

Dr. Kirk Pruhs, Department of Computer Science, University of Pittsburgh
Dr. Panos Chrysanthis, Department of Computer Science, University of Pittsburgh
Dr. Adriana Kovashka, Department of Computer Science, University of Pittsburgh
Dr. Benjamin Moseley, Tepper School of Business, Carnegie Mellon University

Copyright © Alireza Samadianzakaria, 2021

Relational Machine Learning Algorithms
Alireza Samadianzakaria, PhD
University of Pittsburgh, 2021

The majority of learning tasks faced by data scientists involve relational data, yet most standard algorithms for standard learning problems are not designed to accept relational data as input. The standard practice to address this issue is to join the relational data to create the type of geometric input that standard learning algorithms expect. Unfortunately, this standard practice has exponential worst-case time and space complexity. This leads us to consider what we call the Relational Learning Question: "Which standard learning algorithms can be efficiently implemented on relational data, and for those that cannot, is there an alternative algorithm that can be efficiently implemented on relational data and that has similar performance guarantees to the standard algorithm?"

In this dissertation, we address the relational learning question for the well-known problems of support vector machine (SVM), logistic regression, and k-means clustering. First, we design an efficient relational algorithm for regularized linear SVM and logistic regression using sampling methods. We show how to implement a variation of gradient descent that provides a nearly optimal approximation guarantee for stable instances. For the k-means problem, we show that the k-means++ algorithm can be efficiently implemented on relational data, and that a slight variation of the adaptive k-means algorithm can be efficiently implemented on relational data while maintaining a constant approximation guarantee. On the way to developing these algorithms, we give an efficient approximation algorithm for certain sum-product queries with additive inequalities that commonly arise.

Table of Contents

Preface
1.0 Introduction
  1.1 An Illustrative Example
  1.2 The Relational Learning Question
  1.3 Related Works
    1.3.1 Theoretical Frameworks
    1.3.2 Relational Machine Learning
    1.3.3 Query Processing and Distributed Systems
    1.3.4 Experimental
  1.4 Our Results
    1.4.1 Functional Aggregation Queries under Additive Inequality
    1.4.2 RML Coresets
    1.4.3 Training Linear SVM Using Relational Gradient Descent
    1.4.4 Relational K-Means Clustering
2.0 Preliminary
  2.1 Machine Learning
    2.1.1 Gradient Descent
  2.2 Coresets
  2.3 Joins
  2.4 Functional Aggregation Queries
    2.4.1 FAQ-AI
    2.4.2 The Inside-Out Algorithm for Acyclic SumProd Queries
3.0 Functional Aggregation Queries Under Additive Constraint
  3.1 NP-Hardness of FAQ-AI(1)
  3.2 Algorithm for Inequality Row Counting
    3.2.1 An Exact Algorithm
    3.2.2 Applying Sketching
    3.2.3 SumSum FAQ-AI(1)
  3.3 SumProd FAQ-AI(1)
    3.3.1 An Exact Algorithm
    3.3.2 Applying Sketching
  3.4 Example Applications
  3.5 NP-hardness of FAQ-AI(2) Approximation
4.0 Coresets for Regularized Loss Minimization
  4.1 Upper Bound for Uniform Sampling
  4.2 Uniform Sampling Lower Bound
  4.3 General Lower Bound on Coreset Size
    4.3.1 Logistic Regression
    4.3.2 SVM
    4.3.3 Proof of Lemma 43
5.0 Relational Gradient Descent Algorithm For Support Vector Machine Training
  5.1 Hardness of Gradient Approximation
  5.2 Algorithm Design
    5.2.1 Review of Row Counting with a Single Additive Constraint
    5.2.2 Overview of Our Approach
    5.2.3 Pseudo-gradient Descent Algorithm
  5.3 Algorithm Analysis
    5.3.1 Perturbation Analysis
    5.3.2 Stability Analysis
6.0 Relational Algorithms for K-Means Clustering
  6.1 Warm-up: Efficiently Implementing 1-means++ and 2-means++
    6.1.1 Hardness of Relationally Computing the Weights
  6.2 The k-means++ Algorithm
    6.2.1 Relational Implementation of 3-means++
    6.2.2 General Algorithm
      6.2.2.1 Box Construction
      6.2.2.2 Sampling
  6.3 Weighting the Centers
    6.3.1 Uniform Sampling From a Hypersphere
  6.4 Analysis of the Weighting Algorithm
    6.4.1 Defining the Fractional Weights
    6.4.2 Properties of the Fractional Weights
    6.4.3 Comparing Alternative Weights to Fractional Weights
    6.4.4 Comparing Fractional Weights to Optimal
7.0 Conclusion and Future Work
  7.1 Experimental Future Work
  7.2 Boosting Decision Trees
  7.3 Euclidean K-Centers Problem
  7.4 DBSCAN Algorithm
Appendix. Pseudocodes for Section 6.2
Bibliography

List of Figures

1 A Path Join With Two Tables
2 Logistic Regression Loss Function
3 Hinge Loss Function
4 Boxes Used For Sampling Third Center

Preface

First, I wish to thank my advisor Kirk Pruhs for his excellent guidance and for teaching me how to do research. Without his dedication and help, this dissertation would not have been possible. The work presented in this dissertation is the result of collaboration with my advisor and multiple coauthors, and I am grateful to Mahmoud Abo-Khamis, Ryan Curtin, Sungjin Im, Benjamin Moseley, and Yuyan Wang, whose collaboration made this research possible. I would also like to show my gratitude to the committee members Panos Chrysanthis and Adriana Kovashka for taking time from their busy schedules to serve on my committee.

Finally, I would like to thank my friends and family members for their constant support and motivation, especially my wife Shadi for being the audience of all my practice presentations and proofreading my writing, and above all being my best friend.

The research in this dissertation is published in the following papers:

• Curtin, Ryan, Sungjin Im, Benjamin Moseley, Kirk Pruhs, and Alireza Samadian. "Unconditional Coresets for Regularized Loss Minimization." In International Conference on Artificial Intelligence and Statistics, pp. 482-492. PMLR, 2020 [91].
• Abo-Khamis, Mahmoud, Sungjin Im, Benjamin Moseley, Kirk Pruhs, and Alireza Samadian.
"Approximate Aggregate Queries Under Additive Inequalities." In Symposium on Algorithmic Principles of Computer Systems (APOCS), pp. 85-99. Society for Industrial and Applied Mathematics, 2021 [4]. • Abo-Khamis, Mahmoud, Sungjin Im, Benjamin Moseley, Kirk Pruhs, and Alireza Sama- dian. "A Relational Gradient Descent Algorithm For Support Vector Machine Training." In Symposium on Algorithmic Principles of Computer Systems (APOCS), pp. 100-113. Society for Industrial and Applied Mathematics, 2021 [5]. • Moseley, Benjamin, Kirk Pruhs, Alireza Samadian, and Yuyan Wang. "Relational Algo- rithms for k-means Clustering." To Appear In International Colloquium on Automata, Languages and Programming, 2021 [82]. ix 1.0 Introduction Kaggle surveys [1] show that the majority of learning tasks faced by data scientists involve relational data. Most commonly, the relational data is stored in tables in a relational database. The data is usually stored in a normalized form to prevent repetition, and it may have both numerical and categorical values. However, most of the machine learning algorithms need the data to be in a single table with all columns present. Furthermore, many machine learning algorithms such as linear regression and linear classifiers need the data to be numerical. To use the traditional machine learning algorithms, the first step is a feature extraction query that consists of joining the tables and converting all columns to numerical values, which can be done with standard methods such as one-hot encoding [31]. Then the design matrix will be imported into a standard learning algorithm to train the model. Thus, conceptually, standard practice transforms a data science query to a query of the following form: Data Science Query = Standard Learning Algorithm(Design Matrix J = T1 on ··· on Tm) where the joins are evaluated first, and the learning algorithm is then applied to the result. Forming the design matrix can increase the size of the join both because of the join itself and the implicit data. As an example, consider the problem of modeling user interests for YouTube videos based on comments and likes. For example, a data scientist might be interested to predict if a user likes a video based on the comments and the profile of the commenter. In this scenario, there is a table for the likes having the information about the person who likes the video and the video ID, and there is another table for the comments. The join of the two tables will have a row for each couple of comments and likes of each video, which means it will be much larger than the original tables. As another example, you may consider a sales department that wants to predict the volume of sales for each item and each branch. In such scenarios, there might be tables for transactions, customers, items, and branches. The machine learning algorithm needs both positive cases (items that are sold in a branch to a customer) and negative cases (items that 1 are not sold). While the department has the information about the sold items, usually they do not store the items that they have not sold in each day.