AI on the Beach: The COVID-19 Pandemic Is Not Over and the Future Is Uncertain, but There Has Lately Been a Semblance of What Life Was Like Before
Pages: 16. File type: PDF, size: 1,020 KB.
Recommended publications
-
Recurrent Neural Networks
Sequence Modeling: Recurrent and Recursive Nets. Lecture slides for Chapter 10 of Deep Learning, www.deeplearningbook.org, Ian Goodfellow, 2016-09-27; adapted by m.n. for CMPS 392.
• RNNs are a family of neural networks for processing sequential data.
• Recurrent networks can scale to much longer sequences than would be practical for networks without sequence-based specialization, and most recurrent networks can also process sequences of variable length.
• Both properties rest on parameter sharing: if we had separate parameters for each value of the time index, we could not generalize to sequence lengths not seen during training, nor share statistical strength across different sequence lengths and across different positions in time.
Example: consider the two sentences "I went to Nepal in 2009" and "In 2009, I went to Nepal." How can a machine learning model extract the year information? A traditional fully connected feedforward network would have separate parameters for each input feature, so it would need to learn all of the rules of the language separately at each position in the sentence. By comparison, a recurrent neural network shares the same weights across several time steps.
RNN vs. 1D convolution: the output of a convolution is a sequence where each member of the output is a function of a small number of neighboring members of the input. Recurrent networks share parameters in a different way: each member of the output is a function of the previous members of the output, produced using the same update rule applied to the previous outputs. (Goodfellow 2016)
-
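The parameter-sharing idea can be sketched as a minimal single-unit RNN in plain Python. The weights (w_xh, w_hh, b) and the toy input sequences are invented for illustration; the point is that the same three parameters are reused at every time step, so the same network handles sequences of any length:

```python
import math

def rnn_forward(xs, w_xh=0.5, w_hh=0.8, b=0.1, h0=0.0):
    """Run a one-unit vanilla RNN over a sequence of scalar inputs.

    The same parameters (w_xh, w_hh, b) are applied at every time
    step; this sharing is what lets the network generalize to
    sequence lengths not seen during training.
    """
    h = h0
    states = []
    for x in xs:  # one update rule, applied repeatedly
        h = math.tanh(w_xh * x + w_hh * h + b)
        states.append(h)
    return states

short = rnn_forward([1.0, -1.0])
long_ = rnn_forward([1.0, -1.0, 0.5, 0.2, -0.3])  # same weights, longer input
```

A feedforward network with separate per-position parameters would need retraining for the five-step input; here the update rule simply runs three more times.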
Introduction to GANs: CVPR Tutorial (Ian Goodfellow, Staff Research Scientist, Google Brain)
Introduction to GANs. Ian Goodfellow, Staff Research Scientist, Google Brain. CVPR Tutorial on GANs, Salt Lake City, 2018-06-22. [The title slide shows a word cloud of GAN variants: DCGAN, CycleGAN, InfoGAN, LSGAN, BEGAN, LAPGAN, BiGAN, EBGAN, and many others.]
• Generative modeling as density estimation: training data → density function (Goodfellow 2018).
• Generative modeling as sample generation: training data → sample generator, e.g. CelebA faces (Karras et al., 2017).
• Adversarial nets framework: D tries to make D(x) near 1 and D(G(z)) near 0, while G tries to make D(G(z)) near 1. D is a differentiable function applied to x sampled either from the data or from the model; G is a differentiable function applied to input noise z (Goodfellow et al., 2014).
• Self-play: from Arthur Samuel's 1959 checkers agent to modern systems (OpenAI, 2017; Silver et al., 2017; Bansal et al., 2017).
• 3.5 years of progress on faces: 2014, 2015, 2016, 2017 (Brundage et al., 2018).
• A general framework for AI & security threats; under 2 years of progress on ImageNet class-conditional generation (Odena et al., 2016; Miyato et al., 2017; Zhang et al., 2018; example classes: monarch butterfly, goldfinch, daisy, redshank, grey whale). Figure 7: 128×128 pixel images generated by SN-GANs trained on the ILSVRC2012 dataset (published as a conference paper at ICLR 2018).
-
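The adversarial game described in the framework slide can be illustrated numerically. The sketch below uses a fixed, hypothetical logistic discriminator and hand-picked 1-D "real" and "fake" samples (all values invented for illustration); it shows that the generator's loss falls as its samples move toward the data, while the discriminator's job gets harder:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def d_loss(d_params, reals, fakes):
    """Discriminator loss: D wants D(x) near 1 on data, D(G(z)) near 0."""
    a, b = d_params
    loss = -sum(math.log(sigmoid(a * x + b)) for x in reals) / len(reals)
    loss += -sum(math.log(1 - sigmoid(a * f + b)) for f in fakes) / len(fakes)
    return loss

def g_loss(d_params, fakes):
    """Non-saturating generator loss: G wants D(G(z)) near 1."""
    a, b = d_params
    return -sum(math.log(sigmoid(a * f + b)) for f in fakes) / len(fakes)

reals      = [1.8, 2.0, 2.2]     # toy "data" clustered near 2
far_fakes  = [-2.1, -2.0, -1.9]  # generator samples early in training
near_fakes = [1.9, 2.0, 2.1]     # generator samples after learning
d = (1.0, -1.0)                  # hypothetical discriminator D(x) = sigmoid(x - 1)

loss_far, loss_near = g_loss(d, far_fakes), g_loss(d, near_fakes)
```

In a real GAN both networks are trained by alternating gradient steps on these two losses; here the discriminator is frozen purely to make the opposing objectives visible.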
How Babies Think
Thirty years ago most psychologists, philosophers and psychiatrists thought that babies and young children were irrational, egocentric and amoral. They believed children were locked in the concrete here and now—unable to understand cause and effect, imagine the experiences of other people, or appreciate the difference between reality and fantasy. People still often think of children as defective adults.
But in the past three decades scientists have discovered that even the youngest children know more than we would ever have thought possible. Moreover, studies suggest that children learn about the world in much the same way that scientists do—by conducting experiments, analyzing statistics, and forming intuitive theories of the physical, biological and psychological realms. Since about 2000, researchers have started to understand the underlying computational, evolutionary and neurological mechanisms that underpin these remarkable early abilities. These revolutionary findings not only change our ideas about babies, they give us a fresh perspective on human nature itself.
[Pull quote: Even the youngest children know, experience and learn far more than scientists ever thought possible. Photographs by Timothy Archibald.]
PHYSICS FOR BABIES. Why were we so wrong about babies for so long? If you look cursorily at children who are four years old and younger (the age range I will discuss in this article), you might indeed conclude that not much is going on. Babies, after all, cannot talk. And even preschoolers are not good at reporting what they think. Ask your average three-year-old an open-ended question, and you are likely to get a beautiful but incomprehensible stream-of-consciousness monologue.
-
Children's Causal Inferences from Indirect Evidence
Children's Causal Inferences from Indirect Evidence: Backwards Blocking and Bayesian Reasoning in Preschoolers. David M. Sobel (Department of Cognitive and Linguistic Sciences, Brown University), Joshua B. Tenenbaum (Department of Brain and Cognitive Sciences, MIT), Alison Gopnik (Department of Psychology, University of California at Berkeley). Cognitive Science, in press.
Abstract: Previous research suggests that children can infer causal relations from patterns of events. However, what appear to be cases of causal inference may simply reduce to children recognizing relevant associations among events, and responding based on those associations. To examine this claim, in Experiments 1 and 2, children were introduced to a "blicket detector", a machine that lit up and played music when certain objects were placed upon it. Children observed patterns of contingency between objects and the machine's activation that required them to use indirect evidence to make causal inferences. Critically, associative models either made no predictions, or made incorrect predictions, about these inferences. In general, children were able to make these inferences, but some developmental differences between 3- and 4-year-olds were found. We suggest that children's causal inferences are not based on recognizing associations, but rather that children develop a mechanism for Bayesian structure learning. Experiment 3 explicitly tests a prediction of this account: children were asked to make an inference about ambiguous data based on the base rate of certain events occurring. Four-year-olds, but not 3-year-olds, were able to make this inference.
As adults, we know a remarkable amount about the causal structure of the world.
-
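The Bayesian structure-learning account can be made concrete for a backwards-blocking trial. The toy model below is an illustrative assumption, not the paper's fitted model: it posits a deterministic detector (it activates iff at least one blicket is on it) and enumerates the four hypotheses about which of two objects, A and B, are blickets. Seeing A and B activate the machine together raises the probability that B is a blicket; then seeing A activate it alone pushes B's probability back down to the base rate:

```python
from itertools import product

def posterior_b_is_blicket(prior, trials):
    """Posterior probability that object B is a blicket.

    prior:  base rate that any one object is a blicket
    trials: list of (objects_on_detector, machine_activated) pairs;
            the detector is assumed deterministic (activates iff at
            least one blicket is placed on it).
    """
    posts = {}
    for h in product([0, 1], repeat=2):  # h = (A is blicket?, B is blicket?)
        p_h = 1.0
        for is_blicket in h:             # independent prior per object
            p_h *= prior if is_blicket else 1 - prior
        for objs, activated in trials:
            predicted = any(h["AB".index(o)] for o in objs)
            if predicted != activated:
                p_h = 0.0                # hypothesis ruled out by the evidence
        posts[h] = p_h
    z = sum(posts.values())
    return sum(p for h, p in posts.items() if h[1]) / z

after_ab = posterior_b_is_blicket(1 / 3, [("AB", True)])              # 0.6
after_a  = posterior_b_is_blicket(1 / 3, [("AB", True), ("A", True)]) # back to 1/3
```

Raising the base rate (e.g. to 0.8) keeps B's posterior high after the same evidence, which is the base-rate sensitivity probed in Experiment 3.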
Defending Black Box Facial Recognition Classifiers Against Adversarial Attacks
Defending Black Box Facial Recognition Classifiers Against Adversarial Attacks. Rajkumar Theagarajan and Bir Bhanu, Center for Research in Intelligent Systems, University of California, Riverside, CA 92521. [email protected], [email protected]
Abstract: Defending adversarial attacks is a critical step towards reliable deployment of deep learning empowered solutions for biometrics verification. Current approaches for defending Black box models use the classification accuracy of the Black box as a performance metric for validating their defense. However, classification accuracy by itself is not a reliable metric to determine if the resulting image is "adversarial-free". This is a serious problem for online biometrics verification applications, where the ground truth of the incoming image is not known and hence we cannot compute the accuracy of the classifier or know if the image is "adversarial-free" or not. This paper proposes a novel framework for defending Black box systems from adversarial attacks.
[…] the attacker has full knowledge about the classification model's parameters and architecture, whereas in the Black box setting [50] the attacker does not have this knowledge. In this paper we focus on Black box based adversarial attacks. Current defenses against adversarial attacks can be classified into four approaches: 1) modifying the training data, 2) modifying the model, 3) using auxiliary tools, and 4) detecting and rejecting adversarial examples. Modifying the training data involves augmenting the training dataset with adversarial examples and re-training the classifier [22, 30, 64, 73] or performing N pre-selected image transformations in a random order [13, 17, 26, 52]. Modifying the model involves pruning the architecture of the classifier [38, 51, 69] or adding pre/post-processing layers to it [9, 12, 71].
-
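One of the listed defenses, applying N pre-selected input transformations in a random order, can be sketched as follows. The three transformations and the 2×2 "image" (a nested list) are invented stand-ins; real systems use operations like crops, JPEG compression, and bit-depth reduction:

```python
import random

# Three hypothetical pre-selected transformations on a list-of-lists image.
def flip(img):      return [row[::-1] for row in img]          # mirror rows
def shift(img):     return [row[-1:] + row[:-1] for row in img]  # cyclic shift
def quantize(img):  return [[round(v, 1) for v in row] for row in img]

TRANSFORMS = [flip, shift, quantize]

def randomized_preprocess(img, rng):
    """Apply every pre-selected transformation in a random order, so an
    attacker cannot tune a perturbation to one fixed pipeline."""
    out = img
    for t in rng.sample(TRANSFORMS, k=len(TRANSFORMS)):
        out = t(out)
    return out

img = [[0.11, 0.52], [0.93, 0.34]]
defended = randomized_preprocess(img, random.Random(0))
```

The randomness is the point of the defense: each query sees a differently ordered pipeline, which breaks gradient-free attacks that assume a deterministic preprocessor.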
Scientific Thinking in Young Children: Theoretical Advances, Empirical Research, and Policy Implications
Alison Gopnik, Science 337, 1623 (2012); DOI: 10.1126/science.1223416. Online version: http://www.sciencemag.org/content/337/6102/1623.full.html
-
The Creation and Detection of Deepfakes: a Survey
The Creation and Detection of Deepfakes: A Survey. YISROEL MIRSKY, Georgia Institute of Technology and Ben-Gurion University; WENKE LEE, Georgia Institute of Technology.
Generative deep learning algorithms have progressed to a point where it is difficult to tell the difference between what is real and what is fake. In 2018, it was discovered how easy it is to use this technology for unethical and malicious applications, such as the spread of misinformation, impersonation of political leaders, and the defamation of innocent individuals. Since then, these 'deepfakes' have advanced significantly. In this paper, we explore the creation and detection of deepfakes and provide an in-depth view of how these architectures work. The purpose of this survey is to provide the reader with a deeper understanding of (1) how deepfakes are created and detected, (2) the current trends and advancements in this domain, (3) the shortcomings of the current defense solutions, and (4) the areas which require further research and attention.
CCS Concepts: • Security and privacy → Social engineering attacks; Human and societal aspects of security and privacy; • Computing methodologies → Machine learning.
Additional Key Words and Phrases: Deepfake, deep fake, reenactment, replacement, face swap, generative AI, social engineering, impersonation.
ACM Reference format: Yisroel Mirsky and Wenke Lee. 2020. The Creation and Detection of Deepfakes: A Survey. ACM Comput. Surv. 1, 1, Article 1 (January 2020), 38 pages. DOI: XX.XXXX/XXXXXXX.XXXXXXX
1 INTRODUCTION. A deepfake is content, generated by an artificial intelligence, that is authentic in the eyes of a human being. The word deepfake is a combination of the words 'deep learning' and 'fake', and primarily relates to content generated by an artificial neural network, a branch of machine learning.
-
Ian Goodfellow
-
Ian Goodfellow. Born 1985/1986 (age 34-35). Nationality: American. Alma mater: Stanford University, Université de Montréal. Known for: generative adversarial networks, adversarial examples. Field: computer science. Institutions: Apple Inc., Google Brain, OpenAI. Thesis: Deep Learning of Representations and Its Application to Computer Vision (2014). Doctoral advisors: Yoshua Bengio, Aaron Courville. Website: www.iangoodfellow.com
Ian J. Goodfellow (born 1985 or 1986) is a machine learning researcher who currently works at Apple Inc. as director of machine learning in a special projects group. Previously, he worked as a research scientist at Google Brain. He has made a number of contributions to the field of deep learning.
Goodfellow received his bachelor's and master's degrees in computer science from Stanford University under the supervision of Andrew Ng, and his PhD in machine learning from the Université de Montréal in April 2014 under the supervision of Yoshua Bengio and Aaron Courville. His dissertation is titled Deep Learning of Representations and Its Application to Computer Vision. After his PhD, Goodfellow joined Google as part of the Google Brain research team. He then left Google to join the newly founded OpenAI institute, and returned to Google Research in March 2017.
Goodfellow is best known for inventing generative adversarial networks. He is also the lead author of the textbook Deep Learning. At Google, he developed a system that allows Google Maps to automatically transcribe addresses from photos taken by Street View cars, and he demonstrated vulnerabilities of machine learning systems. In 2017, Goodfellow was named one of MIT Technology Review's 35 Innovators Under 35. In 2019, he was included in Foreign Policy's list of 100 Global Thinkers.
-
Semi-Supervised Neural Architecture Search
Semi-Supervised Neural Architecture Search. ¹Renqian Luo*, ²Xu Tan, ²Rui Wang, ²Tao Qin, ¹Enhong Chen, ²Tie-Yan Liu. ¹University of Science and Technology of China, Hefei, China; ²Microsoft Research Asia, Beijing, China. [email protected], [email protected], ²{xuta, ruiwa, taoqin, tyliu}@microsoft.com
Abstract: Neural architecture search (NAS) relies on a good controller to generate better architectures or predict the accuracy of given architectures. However, training the controller requires both abundant and high-quality pairs of architectures and their accuracy, while it is costly to evaluate an architecture and obtain its accuracy. In this paper, we propose SemiNAS, a semi-supervised NAS approach that leverages numerous unlabeled architectures (without evaluation and thus nearly no cost). Specifically, SemiNAS 1) trains an initial accuracy predictor with a small set of architecture-accuracy data pairs; 2) uses the trained accuracy predictor to predict the accuracy of a large number of architectures (without evaluation); and 3) adds the generated data pairs to the original data to further improve the predictor. The trained accuracy predictor can be applied to various NAS algorithms by predicting the accuracy of candidate architectures for them. SemiNAS has two advantages: 1) it reduces the computational cost under the same accuracy guarantee: on the NASBench-101 benchmark dataset, it achieves accuracy comparable to the gradient-based method while using only 1/7 of the architecture-accuracy pairs; 2) it achieves higher accuracy under the same computational cost: it achieves 94.02% test accuracy on NASBench-101, outperforming all the baselines when using the same number of architectures, and on ImageNet it achieves a 23.5% top-1 error rate (under a 600M FLOPS constraint) using 4 GPU-days for search.
-
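The three SemiNAS steps can be sketched with a deliberately tiny stand-in for the accuracy predictor: a one-feature least-squares fit over a made-up depth→accuracy relation. Real SemiNAS trains a neural predictor over architecture encodings, and the numbers below are invented; the sketch only shows the mechanics of the train / pseudo-label / retrain loop:

```python
import statistics

def fit_linear(pairs):
    """Least-squares fit acc ≈ w * depth + b (stand-in for the predictor)."""
    xs = [x for x, _ in pairs]
    mx, my = statistics.mean(xs), statistics.mean(y for _, y in pairs)
    w = sum((x - mx) * (y - my) for x, y in pairs) / sum((x - mx) ** 2 for x in xs)
    return w, my - w * mx

def predict(model, x):
    w, b = model
    return w * x + b

# Step 1: a small, expensive-to-obtain labeled set (depth -> accuracy).
labeled = [(2, 0.80), (6, 0.88)]
model = fit_linear(labeled)

# Step 2: pseudo-label many cheap unlabeled architectures (no evaluation).
unlabeled = [3, 4, 5, 7, 8]
pseudo = [(x, predict(model, x)) for x in unlabeled]

# Step 3: retrain the predictor on labeled + pseudo-labeled data.
model = fit_linear(labeled + pseudo)
```

This toy cannot show the accuracy gains (its pseudo-labels are exactly self-consistent); in SemiNAS the gain comes from the neural predictor generalizing better once it has seen many architecture encodings.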
Amora: Black-Box Adversarial Morphing Attack
Amora: Black-box Adversarial Morphing Attack. Run Wang¹, Felix Juefei-Xu², Qing Guo¹†, Yihao Huang³, Xiaofei Xie¹, Lei Ma⁴, Yang Liu¹,⁵. ¹Nanyang Technological University, Singapore; ²Alibaba Group, USA; ³East China Normal University, China; ⁴Kyushu University, Japan; ⁵Institute of Computing Innovation, Zhejiang University, China.
ABSTRACT: Nowadays, digital facial content manipulation has become ubiquitous and realistic with the success of generative adversarial networks (GANs), making face recognition (FR) systems suffer from unprecedented security concerns. In this paper, we investigate and introduce a new type of adversarial attack to evade FR systems by manipulating facial content, called adversarial morphing attack (a.k.a. Amora). In contrast to adversarial noise attacks that perturb pixel intensity values by adding human-imperceptible noise, our proposed adversarial morphing attack works at the semantic level, perturbing pixels spatially in a coherent manner. To tackle the black-box attack problem, we devise a simple yet effective joint dictionary learning pipeline to obtain a proprietary optical flow field for each attack. Our extensive evaluation on two popular FR systems demonstrates the effectiveness of our adversarial morphing attack at various levels of morphing intensity with smiling facial expression manipulations. Both open-set and closed-set experimental results indicate that a novel black-box adversarial attack based on local deformation is possible, and is vastly different from additive noise attacks. The findings of this work potentially pave a new research direction towards a more thorough understanding and defense.
[Figure 1: Three typical attacks on FR systems: (a) presentation spoofing attacks, e.g. print attack [14], disguise attack [50], mask attack [36], and replay attack [8]; (b) adversarial noise attack [14, 18, 42]; (c) the proposed Amora.]
-
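The contrast between additive noise and spatial morphing can be sketched on a toy 2×2 "image" (nested lists). The displacement field below is a hand-written stand-in for the optical flow that Amora actually learns via joint dictionary learning; the sketch only illustrates that a noise attack changes pixel values while a morphing attack only moves existing values around:

```python
import random

def noise_attack(img, eps, rng):
    """Additive attack: perturb pixel *intensities* by small noise."""
    return [[v + rng.uniform(-eps, eps) for v in row] for row in img]

def morph_attack(img, flow):
    """Morphing attack: pull each output pixel from a displaced source
    location (values are rearranged spatially, never altered)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            di, dj = flow[i][j]  # hypothetical per-pixel displacement
            out[i][j] = img[(i + di) % h][(j + dj) % w]
    return out

img  = [[0.1, 0.2], [0.3, 0.4]]
flow = [[(0, 1), (0, 0)], [(1, 0), (0, 0)]]  # toy stand-in "optical flow"

noisy   = noise_attack(img, 0.05, random.Random(1))
morphed = morph_attack(img, flow)
```

This is why defenses tuned to additive perturbations can miss morphing attacks: the morphed image's pixel statistics come entirely from the original image.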
Philosophy News
2015-2016 PHILOSOPHY NEWS. UNIVERSITY OF TORONTO DEPARTMENT OF PHILOSOPHY.
INSIDE: Wayne Sumner's Brilliant (But Brief) Legal Career; Roseman Lecture 2015, Global Justice: From Theory to Practice; In Memoriam: Francis Sparshott, 1926-2015.
CONTENTS: Welcome & Reports; Brad Inwood; Wayne Sumner's Legal Career; Roseman Lecture 2015; Ergo; Fackenheim Portrait; In Memoriam Francis Sparshott; People & Awards; Book Launch 2015.
PHILOSOPHY NEWS, 2015-2016 edition, Spring 2016. www.philosophy.utoronto.ca. Editors: Anita Di Giacomo and Mary Frances Ellison; layout: Dragon Snap Design. Department of Philosophy, University of Toronto, 170 St. George Street, 4th Floor, Toronto ON M5R 2M8, Canada. Tel: 416-978-3311; Fax: 416-978-8703. We wish to thank the generous donors to the Department of Philosophy, without whom Philosophy News would not be possible. Please support the Department in our endeavours!
YOUR PRIVACY: The University of Toronto respects your privacy. We do not rent, trade or sell our mailing lists. The information on this form is collected and used for the administration of the University's advancement activities undertaken pursuant to the University of Toronto Act, 1971. If you have any questions, please refer to <www.utoronto.ca/privacy> or contact the University's Freedom of Information and Protection of Privacy Coordinator at 416-946-7303, McMurrich Building, Room 201, 12 Queen's Park Crescent West, Toronto, ON M5S 1A8. If you do not wish to receive future newsletters from the Department of Philosophy, please contact us at 416-978-2139 or at [email protected]
Welcome: This year in the department, Tom served as undergraduate coordinator and tri-campus TA coordinator, and we are thankful for his excellent services in these two jobs.
-
Current Directions in Psychological Science
When Younger Learners Can Be Better (or at Least More Open-Minded) Than Older Ones. Alison Gopnik¹, Thomas L. Griffiths¹, and Christopher G. Lucas². ¹Department of Psychology, University of California, Berkeley; ²School of Informatics, University of Edinburgh, United Kingdom. Current Directions in Psychological Science, 2015, Vol. 24(2), 87-92. © The Author(s) 2015. DOI: 10.1177/0963721414556653. Reprints and permissions: sagepub.com/journalsPermissions.nav; cdps.sagepub.com
Abstract: We describe a surprising developmental pattern we found in studies involving three different kinds of problems and age ranges. Younger learners are better than older ones at learning unusual abstract causal principles from evidence. We explore two factors that might contribute to this counterintuitive result. The first is that as our knowledge grows, we become less open to new ideas. The second is that younger minds and brains are intrinsically more flexible and exploratory, although they are also less efficient as a result.
Keywords: cognitive development, causal learning, Bayesian models, simulated annealing
There is a tension in the field of cognitive development. Children perform worse than adults on many measures. As they grow older, children become more focused, they plan better, and, of course, they know more. Yet very young children are prodigious learners, and they are especially good at learning about causes.
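The simulated-annealing lens named in the keywords can be made concrete with the Metropolis acceptance rule: at high "temperature" a searcher is more willing to entertain hypotheses that currently look worse, which is the paper's analogy for young learners' exploratory flexibility. The temperatures and cost difference below are arbitrary illustrative values, not model parameters from the article:

```python
import math

def accept_probability(delta, temperature):
    """Metropolis rule: a worse hypothesis (delta > 0 cost increase)
    is still accepted with probability exp(-delta / T); a better or
    equal one is always accepted."""
    if delta <= 0:
        return 1.0
    return math.exp(-delta / temperature)

# The same slightly-worse candidate hypothesis (delta = 1.0), faced by a
# high-temperature "young" searcher and a low-temperature "adult" one:
child_flexibility = accept_probability(1.0, temperature=2.0)  # exploratory
adult_flexibility = accept_probability(1.0, temperature=0.2)  # exploitative
```

Cooling the temperature over time trades early open-mindedness for later efficiency, mirroring the developmental pattern the authors describe.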