AI on the beach


The COVID-19 pandemic is not over and the future is uncertain, but there has lately been a semblance of what life was like before. As thoughts turn to the possibility of a summer holiday, we offer suggestions for books and podcasts on AI to refresh the mind.

During the pandemic, social media and AI technology have only grown in importance. An excellent book that takes a look at the rise of the companies behind these technologies is Genius Makers by Cade Metz¹. Metz, a New York Times journalist, interviewed key players in the development of modern machine learning and its deployment by tech companies. With an insider's sources and a journalist's knack for storytelling, he describes the people, ideas and history of "The Mavericks Who Brought AI to Google, Facebook, and the World" (subtitle). We recommend the book for readers who want to understand the main ideas behind the latest AI technologies, the personalities of prominent people, and important events in the field.

Credit: Anna Berkut/Alamy Stock Photo.

For a deeper dive into AI topics, and a more panoramic view of the field, the reader may consider the fourth edition of Stuart Russell and Peter Norvig's celebrated text, Artificial Intelligence: A Modern Approach². A warning: the hardback is a hefty 1,115 pages, perhaps useful for anchoring a beach tent; otherwise, the Kindle or electronic versions may be preferred. About 25% of the material in the latest edition is new, and the remaining 75% has been rewritten or presented in a new format. There is expanded coverage of areas such as machine learning, deep learning, robotics, natural language processing, causality, probabilistic programming, and the impact of AI on society. Several chapters include contributing writers such as Judea Pearl (Causal Networks), Ian Goodfellow (Deep Learning) and Anca Dragan (Robotics). The book includes historical notes, as well as online resources such as exercises and implementations of algorithms.

For a creative as well as informative take on machine learning, You Look Like a Thing and I Love You by Janelle Shane³ would be a good choice for a beach read. The writer explores, with hands-on experiments, the weird possibilities of generative AI systems, and provides an accessible introduction to machine learning but also a cautionary tale.

In books, the reader and author work together, one-to-one. Podcasts invite the listener or viewer to observe or listen to conversations. Good podcasts make the most of personality, dialogue and improvisation, and are led by a personable host who serves as an informed interviewer. The podcast phenomenon is part of the democratization of the Internet, in which anyone can have a platform. It can be hard to find quality material among the deluge of information, but good programmes exist, such as Sam Charrington's well-curated This Week in Machine Learning & AI; Paul Middlebrooks' Brain Inspired; and the Lex Fridman Podcast, formerly called the Artificial Intelligence Podcast. Of note, some researchers who were interviewees have started their own programmes, such as Pieter Abbeel's The Robot Brains Podcast. An alternative podcast that takes a (very) critical look at how AI and big tech are ruling society and the economy, with a surprisingly light touch, is This Machine Kills, hosted by Jathan Sadowski and Edward Ongweso Jr.

Some of the episodes from the past year or two that we found particularly interesting include the following. For big-picture ideas on how to think about AI, neuroscience and cognitive science, listen to Paul Cisek (University of Montreal) in two episodes: in part 1, he emphasizes the importance of evolution in understanding the brain and cognition, especially the roles of the environment and actions; and in part 2, he criticizes the 'new AI'. Alison Gopnik (University of California at Berkeley) addresses 'child-inspired AI', that is, the role of studying how children learn in building AI systems, emphasizing imitation, abstract causal models, and active learning via exploration. Matthew Botvinick (DeepMind) describes the interrelations between neuroscience and psychology, including how much we understand about the brain, and how that knowledge might relate to the design of AI.

For another big-picture take on machine intelligence, listen to Melanie Mitchell (Santa Fe Institute), who thinks about the interactions between complexity and AI. Pamela McCorduck, author of Machines Who Think (first published in 1979; second edition in 2004⁴), tells fascinating stories about the early and middle decades of AI.

On the topic of robotics, Anca Dragan (University of California at Berkeley) engages in a lively discussion about human–robot interaction, including her favourite robot (and why), self-driving cars, and reward functions in robotics. Missy Cummings (Duke University) describes her experience as a US Air Force fighter pilot, asks whether the US military should use AI weapons, and advocates for robots to be built with human-centric safety controls. Ayanna Howard (Ohio State University) discusses socialization of robots and the attribution of genders to AI and robotic systems.

And a recent This Machine Kills episode offers an engaging conversation between Salomé Viljoen (NYU) and the hosts about data governance and how to make data collection work for socially beneficial uses.

These books and podcasts, most of which appeared during the pandemic, remind us of the importance of stories, ideas and people in our lives. We hope you enjoy our suggestions and have an opportunity for some refreshing reading and listening this summer, on the beach or at home. ❐

Published online: 20 July 2021
https://doi.org/10.1038/s42256-021-00375-2

References
1. Metz, C. Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World (Dutton, 2021).
2. Russell, S. & Norvig, P. Artificial Intelligence: A Modern Approach 4th edn (Pearson, 2020).
3. Shane, J. You Look Like a Thing and I Love You (Wildfire, 2019).
4. McCorduck, P. Machines Who Think 2nd edn (Routledge, 2004).

NATURE MACHINE INTELLIGENCE | VOL 3 | JULY 2021 | 555 | www.nature.com/natmachintell
Recommended publications
  • Recurrent Neural Networks
    Sequence Modeling: Recurrent and Recursive Nets. Lecture slides for Chapter 10 of Deep Learning (www.deeplearningbook.org), Ian Goodfellow, 2016-09-27; adapted by m.n. for CMPS 392.
    RNNs are a family of neural networks for processing sequential data. Recurrent networks can scale to much longer sequences than would be practical for networks without sequence-based specialization, and most recurrent networks can also process sequences of variable length. They are based on parameter sharing: if we had separate parameters for each value of the time index, we could not generalize to sequence lengths not seen during training, nor share statistical strength across different sequence lengths and across different positions in time. (Goodfellow 2016)
    Example: consider the two sentences "I went to Nepal in 2009" and "In 2009, I went to Nepal." How can a machine learning model extract the year information? A traditional fully connected feedforward network would have separate parameters for each input feature, so it would need to learn all of the rules of the language separately at each position in the sentence. By comparison, a recurrent neural network shares the same weights across several time steps. (Goodfellow 2016)
    RNN vs. 1D convolution: the output of convolution is a sequence in which each member of the output is a function of a small number of neighbouring members of the input. Recurrent networks share parameters in a different way: each member of the output is a function of the previous members of the output, produced using the same update rule applied to the previous outputs.
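The parameter-sharing idea in the slides above can be sketched in a few lines of NumPy: a single set of weights (W, U, b, all toy random values here, not a trained model) is reused at every time step, so the same network processes both word orders and any sequence length.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary: one-hot vectors for a handful of words.
vocab = ["I", "went", "to", "Nepal", "in", "2009"]
word_to_vec = {w: np.eye(len(vocab))[i] for i, w in enumerate(vocab)}

# One set of parameters (W, U, b) is reused at every time step --
# this is the parameter sharing the slides describe.
hidden = 4
W = rng.normal(size=(hidden, hidden))       # hidden-to-hidden weights
U = rng.normal(size=(hidden, len(vocab)))   # input-to-hidden weights
b = np.zeros(hidden)

def rnn_state(words):
    """Apply the same update rule at each position in the sequence."""
    h = np.zeros(hidden)
    for w in words:
        h = np.tanh(W @ h + U @ word_to_vec[w] + b)
    return h

# The same weights handle different orderings and lengths.
h1 = rnn_state(["I", "went", "to", "Nepal", "in", "2009"])
h2 = rnn_state(["in", "2009", "I", "went", "to", "Nepal"])
print(h1.shape, h2.shape)
```

A feedforward network with position-specific parameters could not be applied to the reordered sentence at all; here the loop body is identical at every step.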
  • Ian Goodfellow, Staff Research Scientist, Google Brain CVPR
    Introduction to GANs. Ian Goodfellow, Staff Research Scientist, Google Brain. CVPR Tutorial on GANs, Salt Lake City, 2018-06-22.
    Slide highlights: generative modelling as density estimation (training data to density function) and as sample generation (training data to sample generator, e.g. CelebA; Karras et al., 2017). The adversarial nets framework: D tries to make D(G(z)) near 0 and D(x) near 1, while G tries to make D(G(z)) near 1, with a differentiable function D applied to x sampled from the data and from the model, and a differentiable function G applied to input noise z (Goodfellow et al., 2014). Further slides cover self-play (from Arthur Samuel's 1959 checkers agent to OpenAI, 2017; Silver et al., 2017; Bansal et al., 2017), 3.5 years of progress on faces (2014-2017; Brundage et al., 2018), a general framework for AI and security threats, and under 2 years of progress on ImageNet (Odena et al., 2016; Miyato et al., 2017; Zhang et al., 2018: 128x128 pixel images generated by SN-GANs trained on the ILSVRC2012 dataset), alongside a wall of named GAN variants (DCGAN, InfoGAN, CycleGAN, BiGAN, EBGAN, LAPGAN and many more).
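The two-player objective described on the tutorial slides can be sketched numerically. D, G, and every constant below are illustrative stand-ins (a fixed logistic score and an affine generator), not a trained model; the point is just how the losses pull D(x) toward 1 and D(G(z)) toward 0 or 1 depending on the player.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Toy discriminator: a logistic score in (0, 1). Parameters are hand-set.
def D(x, w=2.0, b=-1.0):
    return sigmoid(w * x + b)

# Toy generator: maps input noise z to fake 1-D samples.
def G(z, mu=0.5, sigma=0.1):
    return mu + sigma * z

x_real = rng.normal(1.0, 0.1, size=256)   # samples from the "data"
x_fake = G(rng.normal(size=256))          # samples from the model

# The framework's objectives as negative log-likelihood losses:
# D maximizes log D(x) + log(1 - D(G(z))); G maximizes log D(G(z)).
d_loss = -np.mean(np.log(D(x_real)) + np.log(1 - np.clip(D(x_fake), 0, 1 - 1e-12)))
g_loss = -np.mean(np.log(D(x_fake)))
print(d_loss, g_loss)
```

In a real GAN both players would take alternating gradient steps on these losses; here the losses are only evaluated once to show the structure of the game.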
  • How Babies Think
    RAISE GREAT KIDS. Even the youngest children know, experience and learn far more than scientists ever thought possible. Photographs by Timothy Archibald.
    Thirty years ago most psychologists, philosophers and psychiatrists thought that babies and young children were irrational, egocentric and amoral. They believed children were locked in the concrete here and now—unable to understand cause and effect, imagine the experiences of other people, or appreciate the difference between reality and fantasy. People still often think of children as defective adults.
    But in the past three decades scientists have discovered that even the youngest children know more than we would ever have thought possible. Moreover, studies suggest that children learn about the world in much the same way that scientists do—by conducting experiments, analyzing statistics, and forming intuitive theories of the physical, biological and psychological realms. Since about 2000, researchers have started to understand the underlying computational, evolutionary and neurological mechanisms that underpin these remarkable early abilities. These revolutionary findings not only change our ideas about babies, they give us a fresh perspective on human nature itself.
    PHYSICS FOR BABIES. Why were we so wrong about babies for so long? If you look cursorily at children who are four years old and younger (the age range I will discuss in this article), you might indeed conclude that not much is going on. Babies, after all, cannot talk. And even preschoolers are not good at reporting what they think. Ask your average three-year-old an open-ended question, and you are likely to get a beautiful but incomprehensible stream-of-consciousness monologue.
  • Children's Causal Inferences from Indirect Evidence
    Children's causal inferences from indirect evidence: Backwards blocking and Bayesian reasoning in preschoolers. David M. Sobel (Department of Cognitive and Linguistic Sciences, Brown University), Joshua B. Tenenbaum (Department of Brain and Cognitive Sciences, MIT), Alison Gopnik (Department of Psychology, University of California at Berkeley). Cognitive Science, in press.
    Abstract: Previous research suggests that children can infer causal relations from patterns of events. However, what appear to be cases of causal inference may simply reduce to children recognizing relevant associations among events, and responding based on those associations. To examine this claim, in Experiments 1 and 2, children were introduced to a "blicket detector", a machine that lit up and played music when certain objects were placed upon it. Children observed patterns of contingency between objects and the machine's activation that required them to use indirect evidence to make causal inferences. Critically, associative models either made no predictions, or made incorrect predictions about these inferences. In general, children were able to make these inferences, but some developmental differences between 3- and 4-year-olds were found. We suggest that children's causal inferences are not based on recognizing associations, but rather that children develop a mechanism for Bayesian structure learning. Experiment 3 explicitly tests a prediction of this account: children were asked to make an inference about ambiguous data based on the base rate of certain events occurring. Four-year-olds, but not 3-year-olds, were able to make this inference.
    As adults, we know a remarkable amount about the causal structure of the world.
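The Bayesian account in the abstract above can be illustrated with a tiny model of the blicket detector. The uniform base rate, the deterministic detector, and the two-object world below are illustrative assumptions, not the paper's exact model; they are enough to reproduce the backwards-blocking pattern, where seeing A alone activate the machine lowers the belief that B is a blicket.

```python
from itertools import product

def posterior_b_is_blicket(base_rate, events):
    """P(B is a blicket | events), under independent priors on A and B.

    Each event is (set_of_objects_on_detector, activated). The detector
    is assumed to activate exactly when at least one blicket is on it.
    """
    weights = {}
    for a, b in product([0, 1], repeat=2):
        prior = (base_rate if a else 1 - base_rate) * \
                (base_rate if b else 1 - base_rate)
        like = 1.0
        for objects_on, activated in events:
            pred = int(("A" in objects_on and a) or ("B" in objects_on and b))
            like *= 1.0 if pred == activated else 0.0
        weights[(a, b)] = prior * like
    z = sum(weights.values())
    return sum(w for (a, b), w in weights.items() if b) / z

# Backwards blocking: A and B together activate the machine, then A alone does.
after_ab = posterior_b_is_blicket(1 / 3, [({"A", "B"}, 1)])
after_a = posterior_b_is_blicket(1 / 3, [({"A", "B"}, 1), ({"A"}, 1)])
print(after_ab, after_a)
```

After AB activates, B is probably a blicket (0.6 here); once A alone suffices to explain the evidence, belief in B falls back to its base rate (1/3), with no direct negative evidence about B at all. An associative model that only tracks B's pairings with activation cannot produce this retrospective revision.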
  • Defending Black Box Facial Recognition Classifiers Against Adversarial Attacks
    Defending Black Box Facial Recognition Classifiers Against Adversarial Attacks. Rajkumar Theagarajan and Bir Bhanu, Center for Research in Intelligent Systems, University of California, Riverside, CA 92521. [email protected], [email protected]
    Abstract: Defending adversarial attacks is a critical step towards reliable deployment of deep learning empowered solutions for biometrics verification. Current approaches for defending Black box models use the classification accuracy of the Black box as a performance metric for validating their defense. However, classification accuracy by itself is not a reliable metric to determine if the resulting image is "adversarial-free". This is a serious problem for online biometrics verification applications, where the ground truth of the incoming image is not known and hence we cannot compute the accuracy of the classifier or know if the image is "adversarial-free" or not. This paper proposes a novel framework for defending Black box systems from adversarial attacks.
    In the White box setting the attacker has full knowledge of the classification model's parameters and architecture, whereas in the Black box setting [50] the attacker does not have this knowledge. In this paper we focus on Black box based adversarial attacks. Current defenses against adversarial attacks can be classified into four approaches: 1) modifying the training data, 2) modifying the model, 3) using auxiliary tools, and 4) detecting and rejecting adversarial examples. Modifying the training data involves augmenting the training dataset with adversarial examples and re-training the classifier [22, 30, 64, 73], or performing N pre-selected image transformations in a random order [13, 17, 26, 52]. Modifying the model involves pruning the architecture of the classifier [38, 51, 69] or adding pre/post-processing layers to it [9, 12, 71].
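The first defense family listed above, augmenting the training data with adversarial examples, presupposes a way to generate them. A common choice is an FGSM-style step; the hand-set linear classifier and all numbers below are illustrative assumptions for a minimal sketch, not the paper's method or model.

```python
import numpy as np

# Hand-set linear classifier: score > 0 -> class +1, score < 0 -> class -1.
w = np.array([1.5, -2.0])
b = 0.1

def score(x):
    return x @ w + b

def fgsm(x, y, eps):
    """FGSM-style perturbation against a linear score.

    The gradient of the margin y * score(x) w.r.t. x is y * w, so
    stepping along -y * sign(w) (scaled by eps) reduces the margin.
    """
    return x - eps * y * np.sign(w)

x = np.array([0.2, -0.3])   # a clean point, correctly scored as class +1
y = 1.0
x_adv = fgsm(x, y, eps=0.5)

print(score(x), score(x_adv))
```

Here the clean point scores 1.0 while the perturbed point scores negative, so the label flips even though each coordinate moved by only 0.5. Adversarial training would now add (x_adv, y) back into the training set and re-fit the classifier.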
Scientific Thinking in Young Children (Alison Gopnik, Science 337, 1623 (2012))
    Scientific Thinking in Young Children: Theoretical Advances, Empirical Research, and Policy Implications. Alison Gopnik. Science 337, 1623 (2012); DOI: 10.1126/science.1223416.
    This article cites 29 articles, 8 of which can be accessed free, and appears in the subject collection Psychology. Science (print ISSN 0036-8075; online ISSN 1095-9203) is published weekly, except the last week in December, by the American Association for the Advancement of Science, 1200 New York Avenue NW, Washington, DC 20005.
  • The Creation and Detection of Deepfakes: a Survey
    The Creation and Detection of Deepfakes: A Survey. YISROEL MIRSKY, Georgia Institute of Technology and Ben-Gurion University; WENKE LEE, Georgia Institute of Technology.
    Generative deep learning algorithms have progressed to a point where it is difficult to tell the difference between what is real and what is fake. In 2018, it was discovered how easy it is to use this technology for unethical and malicious applications, such as the spread of misinformation, impersonation of political leaders, and the defamation of innocent individuals. Since then, these 'deepfakes' have advanced significantly. In this paper, we explore the creation and detection of deepfakes and provide an in-depth view of how these architectures work. The purpose of this survey is to provide the reader with a deeper understanding of (1) how deepfakes are created and detected, (2) the current trends and advancements in this domain, (3) the shortcomings of the current defense solutions, and (4) the areas which require further research and attention.
    CCS Concepts: Security and privacy → Social engineering attacks; Human and societal aspects of security and privacy. Computing methodologies → Machine learning.
    Additional Key Words and Phrases: Deepfake, deep fake, reenactment, replacement, face swap, generative AI, social engineering, impersonation.
    ACM Reference format: Yisroel Mirsky and Wenke Lee. 2020. The Creation and Detection of Deepfakes: A Survey. ACM Comput. Surv. 1, 1, Article 1 (January 2020), 38 pages. DOI: XX.XXXX/XXXXXXX.XXXXXXX
    1 INTRODUCTION. A deepfake is content, generated by an artificial intelligence, that is authentic in the eyes of a human being. The word deepfake is a combination of the words 'deep learning' and 'fake' and primarily relates to content generated by an artificial neural network, a branch of machine learning.
  • Ian Goodfellow Deep Learning Pdf
    Ian Goodfellow (born 1985 or 1986) is an American machine learning researcher who currently works at Apple Inc. as director of machine learning in the Special Projects Group. Previously, he worked as a research scientist at Google Brain. He is known for generative adversarial networks and adversarial examples, and has made a number of contributions to the field of deep learning.
    Biography: Goodfellow received bachelor's and master's degrees in computer science from Stanford University under the direction of Andrew Ng, and a PhD in machine learning from the Université de Montréal in April 2014 under the direction of Yoshua Bengio and Aaron Courville. His dissertation is titled "Deep Learning of Representations and its Application to Computer Vision". After his PhD, Goodfellow joined Google as part of the Google Brain research team. He then left Google to join the newly founded OpenAI institute, and returned to Google Research in March 2017. Goodfellow is best known for inventing generative adversarial networks, and he is also the lead author of the textbook Deep Learning. At Google, he developed a system that enables Google Maps to automatically transcribe addresses from photos taken by Street View cars, and he demonstrated vulnerabilities of machine learning systems. In 2017, Goodfellow was named one of MIT Technology Review's 35 Innovators Under 35. In 2019, he was included in Foreign Policy's list of 100 Global Thinkers.
  • Semi-Supervised Neural Architecture Search
    Semi-Supervised Neural Architecture Search. Renqian Luo and Enhong Chen (University of Science and Technology of China, Hefei, China); Xu Tan, Rui Wang, Tao Qin and Tie-Yan Liu (Microsoft Research Asia, Beijing, China). [email protected], [email protected], {xuta, ruiwa, taoqin, tyliu}@microsoft.com
    Abstract: Neural architecture search (NAS) relies on a good controller to generate better architectures or predict the accuracy of given architectures. However, training the controller requires both abundant and high-quality pairs of architectures and their accuracy, while it is costly to evaluate an architecture and obtain its accuracy. In this paper, we propose SemiNAS, a semi-supervised NAS approach that leverages numerous unlabeled architectures (without evaluation and thus at nearly no cost). Specifically, SemiNAS 1) trains an initial accuracy predictor with a small set of architecture–accuracy data pairs; 2) uses the trained accuracy predictor to predict the accuracy of a large number of architectures (without evaluation); and 3) adds the generated data pairs to the original data to further improve the predictor. The trained accuracy predictor can be applied to various NAS algorithms by predicting the accuracy of candidate architectures for them. SemiNAS has two advantages. 1) It reduces the computational cost under the same accuracy guarantee: on the NASBench-101 benchmark dataset, it achieves accuracy comparable to the gradient-based method while using only 1/7 of the architecture–accuracy pairs. 2) It achieves higher accuracy under the same computational cost: it achieves 94.02% test accuracy on NASBench-101, outperforming all the baselines when using the same number of architectures, and on ImageNet it achieves a 23.5% top-1 error rate (under a 600M FLOPS constraint) using 4 GPU-days for search.
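The three SemiNAS steps above can be sketched with stand-in components. The feature-vector "architectures", the simulated (noise-free, linear) evaluator, and the least-squares predictor below are all illustrative assumptions chosen so the loop is easy to follow; they are not the paper's encoder or benchmark.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in: an "architecture" is a feature vector in [0, 1]^3, and its
# expensive evaluation is simulated by a hidden linear accuracy function.
TRUE_W = np.array([0.3, 0.5, 0.2])

def true_accuracy(arch):
    return float(arch @ TRUE_W)      # expensive in real NAS

def fit_predictor(X, y):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# 1) Train an initial accuracy predictor on a small labeled set.
X_small = rng.uniform(size=(8, 3))
y_small = np.array([true_accuracy(a) for a in X_small])
coef = fit_predictor(X_small, y_small)

# 2) Pseudo-label many unlabeled architectures (no evaluation cost).
X_unlabeled = rng.uniform(size=(200, 3))
y_pseudo = X_unlabeled @ coef

# 3) Add the generated pairs and retrain the predictor.
coef2 = fit_predictor(np.vstack([X_small, X_unlabeled]),
                      np.concatenate([y_small, y_pseudo]))

# The predictor then ranks candidates for any NAS algorithm.
best = X_unlabeled[np.argmax(X_unlabeled @ coef2)]
print(true_accuracy(best))
```

Because the simulated evaluator is noise-free and linear, the toy predictor recovers it exactly; the real benefit of the scheme shows up when evaluations are costly and the predictor is an imperfect learned model.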
  • Amora: Black-Box Adversarial Morphing Attack
    Amora: Black-box Adversarial Morphing Attack. Run Wang (Nanyang Technological University, Singapore), Felix Juefei-Xu (Alibaba Group, USA), Qing Guo (Nanyang Technological University, Singapore), Yihao Huang (East China Normal University, China), Xiaofei Xie (Nanyang Technological University, Singapore), Lei Ma (Kyushu University, Japan), Yang Liu (Nanyang Technological University, Singapore, and Institute of Computing Innovation, Zhejiang University, China).
    Abstract: Nowadays, digital facial content manipulation has become ubiquitous and realistic with the success of generative adversarial networks (GANs), making face recognition (FR) systems suffer from unprecedented security concerns. In this paper, we investigate and introduce a new type of adversarial attack to evade FR systems by manipulating facial content, called the adversarial morphing attack (a.k.a. Amora). In contrast to the adversarial noise attack, which perturbs pixel intensity values by adding human-imperceptible noise, our proposed adversarial morphing attack works at the semantic level, perturbing pixels spatially in a coherent manner. To tackle the black-box attack problem, we devise a simple yet effective joint dictionary learning pipeline to obtain a proprietary optical flow field for each attack. Our extensive evaluation on two popular FR systems demonstrates the effectiveness of our adversarial morphing attack at various levels of morphing intensity with smiling facial expression manipulations. Both open-set and closed-set experimental results indicate that a novel black-box adversarial attack based on local deformation is possible, and is vastly different from additive noise attacks. The findings of this work potentially pave a new research direction towards a more thorough understanding and
    Figure 1: Three typical attacks on FR systems. (a) Presentation spoofing attacks (e.g., print attack [14], disguise attack [50], mask attack [36], and replay attack [8]); (b) adversarial noise attack [14, 18, 42]; (c) the proposed Amora.
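The contrast the abstract draws, spatial displacement versus additive noise, can be sketched on a toy image. The constant one-pixel shift below is a crude stand-in for the learned per-attack optical flow field, and the whole example is an illustration of the distinction, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(4)
img = rng.uniform(size=(8, 8))  # toy grayscale "face"

# Additive noise attack: perturb pixel intensity values in place.
noise_attack = np.clip(img + rng.normal(0.0, 0.05, img.shape), 0.0, 1.0)

def warp(image, dx, dy):
    """Move pixels spatially (a crude constant 'optical flow')."""
    return np.roll(np.roll(image, dx, axis=1), dy, axis=0)

# Morphing-style attack: relocate pixels instead of changing their values.
morph_attack = warp(img, dx=1, dy=0)

# The morphed image contains exactly the original intensity values,
# just rearranged; the noised image does not.
print(sorted(img.ravel().tolist()) == sorted(morph_attack.ravel().tolist()))
```

This is why intensity-based defenses tuned to additive noise can miss morphing attacks: the pixel-value histogram is untouched, and only the spatial arrangement changes.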
  • Philosophy News
    2015-2016 Philosophy News. University of Toronto Department of Philosophy, Spring 2016 edition. www.philosophy.utoronto.ca
    Inside: Wayne Sumner's Brilliant (But Brief) Legal Career; Roseman Lecture 2015, Global Justice: From Theory to Practice; In Memoriam: Francis Sparshott, 1926-2015.
    Contents: Welcome & Reports (3); Brad Inwood (8); Wayne Sumner's Legal Career (10); Roseman Lecture 2015 (12); Ergo (14); Fackenheim Portrait (15); In Memoriam Francis Sparshott (16); People & Awards (18); Book Launch 2015 (23).
    Editors: Anita Di Giacomo and Mary Frances Ellison; layout: Dragon Snap Design. Department of Philosophy, University of Toronto, 170 St. George Street, 4th Floor, Toronto ON M5R 2M8, Canada. Tel: 416-978-3311; Fax: 416-978-8703. We wish to thank the generous donors to the Department of Philosophy, without whom Philosophy News would not be possible. Please support the Department in our endeavours!
    Welcome: This year in the department, Tom served as undergraduate coordinator and tri-campus TA coordinator, and we are thankful for his excellent services in these two jobs.
  • Current Directions in Psychological Science
    When Younger Learners Can Be Better (or at Least More Open-Minded) Than Older Ones. Alison Gopnik and Thomas L. Griffiths (Department of Psychology, University of California, Berkeley) and Christopher G. Lucas (School of Informatics, University of Edinburgh, United Kingdom). Current Directions in Psychological Science, 2015, Vol. 24(2), 87-92. DOI: 10.1177/0963721414556653.
    Abstract: We describe a surprising developmental pattern we found in studies involving three different kinds of problems and age ranges. Younger learners are better than older ones at learning unusual abstract causal principles from evidence. We explore two factors that might contribute to this counterintuitive result. The first is that as our knowledge grows, we become less open to new ideas. The second is that younger minds and brains are intrinsically more flexible and exploratory, although they are also less efficient as a result.
    Keywords: cognitive development, causal learning, Bayesian models, simulated annealing.
    There is a tension in the field of cognitive development. Children perform worse than adults on many measures. As they grow older, children become more focused, they plan better, and, of course, they know more. Yet very young children are prodigious learners, and they are especially good at learning about causes. ...also suggest that younger learners might sometimes be open to more possibilities than older ones. Theoretically, we propose two possible complementary explanations for this pattern, inspired by viewing children's learning through the lens of computer science.
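The computer-science lens named in the keywords above, simulated annealing, can be sketched with the standard acceptance rule: a high-"temperature" searcher (the abstract's flexible young learner) accepts a worse hypothesis far more readily than a low-temperature one (the focused older learner). The temperatures and the loss difference are illustrative numbers only.

```python
import math

def accept_prob(delta, temperature):
    """Simulated-annealing acceptance probability for a candidate move.

    delta is the change in objective value (negative means the candidate
    hypothesis looks worse); worse moves are accepted with probability
    exp(delta / temperature), so higher temperature means more exploration.
    """
    return 1.0 if delta >= 0 else math.exp(delta / temperature)

worse_by = -1.0  # a candidate hypothesis that currently looks worse

young = accept_prob(worse_by, temperature=2.0)   # flexible, exploratory
old = accept_prob(worse_by, temperature=0.2)     # focused, efficient
print(young, old)
```

At temperature 2.0 the worse hypothesis is accepted about 61% of the time, at 0.2 almost never, which mirrors the article's proposal that younger minds trade efficiency for openness to unusual possibilities.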