6th USENIX Symposium on Networked Systems Design and Implementation (NSDI '09)

Announcement and Call for Papers
6th USENIX Symposium on Networked Systems Design and Implementation (NSDI '09)
Sponsored by USENIX in cooperation with ACM SIGCOMM and ACM SIGOPS
http://www.usenix.org/nsdi09
April 22–24, 2009, Boston, MA

Important Dates
Paper titles and abstracts due: October 3, 2008, 6:00 p.m. EDT
Complete paper submissions due: October 10, 2008, 6:00 p.m. EDT (hard deadline)
Notification of acceptance: December 19, 2008
Papers due for shepherding: February 5, 2009
Final papers due: February 25, 2009
Poster proposals due: March 1, 2009
Notification to poster presenters: March 15, 2009

Steering Committee
Thomas Anderson, University of Washington
Brian Noble, University of Michigan
Jennifer Rexford, Princeton University
Mike Schroeder, Microsoft Research
Chandu Thekkath, Microsoft Research
Amin Vahdat, University of California, San Diego
Ellie Young, USENIX

Conference Organizers

Program Co-Chairs
Jennifer Rexford, Princeton University
Emin Gün Sirer, Cornell University

Program Committee
Miguel Castro, Microsoft Research
Jeff Dean, Google, Inc.
Nick Feamster, Georgia Institute of Technology
Michael J. Freedman, Princeton University
Steven D. Gribble, University of Washington
Krishna Gummadi, Max Planck Institute for Software Systems
Steven Hand, University of Cambridge
Farnam Jahanian, University of Michigan
Dina Katabi, Massachusetts Institute of Technology
Arvind Krishnamurthy, University of Washington
Bruce Maggs, Carnegie Mellon University/Akamai
Petros Maniatis, Intel Research Berkeley
Nick McKeown, Stanford University
Greg Minshall
Michael Mitzenmacher, Harvard University
Jeff Mogul, HP Labs
Venugopalan Ramasubramanian, Microsoft Research
Pablo Rodriguez, Telefónica, Spain
Kobus van der Merwe, AT&T Labs—Research
Geoffrey M. Voelker, University of California, San Diego
Matt Welsh, Harvard University
Hui Zhang, Carnegie Mellon University/Rinera
Yuanyuan Zhou, University of Illinois at Urbana-Champaign

Overview
NSDI focuses on the design principles and practical evaluation of large-scale networked and distributed systems. Systems as diverse as Internet routing, peer-to-peer and overlay networks, sensor networks, Web-based systems, and measurement infrastructures share a set of common challenges. Progress in any of these areas requires a deep understanding of how researchers are addressing the challenges of large-scale systems in other contexts. Our goal is to bring together researchers from across the networking and systems community—including communication, distributed systems, and operating systems—to foster a broad approach to addressing our common research challenges.

Topics
NSDI will provide a high-quality, single-track forum for presenting new results and discussing ideas that overlap these disciplines. We seek a broad variety of work that furthers the knowledge and understanding of the networked systems community as a whole, continues a significant research dialog, or pushes the architectural boundaries of large-scale network services. We solicit papers describing original and previously unpublished research. Specific topics of interest include but are not limited to:
• Self-organizing, autonomous, and federated networked systems
• Scalable techniques for providing high availability and reliability
• Energy-efficient computing in networked environments
• Clean-slate approaches to communication systems
• Distributed storage, caching, and query processing
• Security, robustness, and fault tolerance in networked environments
• Overlays and peer-to-peer systems
• Systems and protocols for mobile and wireless systems
• Protocols and OS support for sensor networking
• Novel operating system support for networked systems
• Virtualization and resource management for networked systems
• Design and evaluation of large-scale networked system testbeds
• Network measurements, workload, and topology characterization
• Managing, debugging, and diagnosing problems in networked systems
• Practical protocols and algorithms for networked systems
• Addressing novel challenges of the developing world
• Experience with deployed networked systems

What to Submit
Submissions must be full papers, at most 14 single-spaced 8.5" x 11" pages, including figures, tables, and references, in two-column format, using 10-point type on 12-point (single-spaced) leading, with a maximum text block of 6.5" wide x 9" deep with .25" intercolumn space. Papers that do not meet the size and formatting requirements will not be reviewed. (A LaTeX sketch matching these dimensions appears at the end of this call.)

Submissions will be judged on originality, significance, interest, clarity, relevance, and correctness.

NSDI is single-blind, meaning that authors should include their names on their paper submissions and do not need to obscure references to their existing work.

Authors must submit their paper's title and abstract by October 3, 2008, and the corresponding full paper by October 10, 2008 (hard deadline). All papers must be submitted via the Web form on the Call for Papers Web site, http://www.usenix.org/nsdi09/cfp. Accepted papers may be shepherded through an editorial review process by a member of the Program Committee. Based on initial feedback from the Program Committee, authors of shepherded papers will submit an editorial revision of their paper to their Program Committee shepherd by February 5, 2009. The shepherd will review the paper and give the author additional comments. All authors (shepherded or not) will produce a final, printable PDF and the equivalent HTML by February 25, 2009, for the conference Proceedings.

All papers will be available online to registered attendees prior to the conference and will be available online to everyone starting on April 22, 2009. If your accepted paper should not be published prior to the event, please notify [email protected].

Simultaneous submission of the same work to multiple venues, submission of previously published work, and plagiarism constitute dishonesty or fraud. USENIX, like other scientific and technical conferences and journals, prohibits these practices and may, on the recommendation of a program chair, take action against authors who have committed them. In some cases, program committees may share information about submitted papers with other conference chairs and journal editors to ensure the integrity of papers under consideration.

Previous publication at a workshop is acceptable as long as the NSDI submission includes substantial new material. For instance, submitting a paper that provides a full evaluation of an idea that was previously sketched in a 5-page position paper is acceptable. Authors of such papers should cite the prior workshop paper and clearly state the submission's contribution relative to the prior workshop publication.

Authors uncertain whether their submission meets USENIX's guidelines should contact the Program Co-Chairs, [email protected], or the USENIX office, [email protected].

Papers accompanied by nondisclosure agreement forms will not be considered. All submissions will be treated as confidential prior to publication on the USENIX NSDI '09 Web site, http://www.usenix.org/nsdi09.

One author per paper will receive a registration discount of $200. USENIX will offer a complimentary registration upon request.

Best Paper Awards
Awards will be given for the best paper and the best paper for which a student is the lead author.

Birds-of-a-Feather Sessions
Birds-of-a-Feather sessions (BoFs) are informal gatherings organized by attendees interested in a particular topic. BoFs will be held in the evening. BoFs may be scheduled in advance by emailing the USENIX Conference Department at [email protected]. BoFs may also be scheduled at the conference.

Poster Session
NSDI will continue its long-running tradition of showcasing early research in progress at a poster session. New, ongoing work, early findings from measurement studies, and demonstrations of newly deployed systems are highly encouraged. We are particularly interested in presentations of student work. To submit a poster, please send a proposal, one page or less, by March 1, 2009, to nsdi09posters@usenix.org. The poster session chairs will send back decisions by March 15, 2009.

Registration Materials
Complete program and registration information will be available in January 2009 on the conference Web site. If you would like to receive the latest USENIX conference information, please join our mailing list at http://www.usenix.org/about/mailing.html.

Rev. 2/2/09
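
Formatting Sketch (LaTeX)
As referenced in "What to Submit" above, the following is a minimal LaTeX sketch that appears to satisfy the stated requirements: 8.5" x 11" paper, two columns, 10-point type on 12-point leading, a 6.5"-wide by 9"-deep text block, and .25" of intercolumn space. The article class, the geometry package, and the Times font are illustrative assumptions; this call does not mandate a specific template, so verify the compiled output against the stated limits before submitting.

% Minimal sketch, not an official NSDI template; class, packages, and
% font are assumptions chosen to match the stated dimensions.
\documentclass[10pt,twocolumn]{article}   % 10pt article uses 12pt leading
\usepackage[letterpaper,                  % 8.5" x 11" pages
            textwidth=6.5in,              % maximum text block width
            textheight=9in,               % maximum text block depth
            columnsep=0.25in]{geometry}   % intercolumn space
\usepackage{times}   % font is an assumption; the call names no typeface
\begin{document}
\title{Your NSDI '09 Submission}
\author{Author One \and Author Two}   % single-blind: include author names
\maketitle
Body text\ldots
\end{document}

Note that the sketch controls only page geometry; the 14-page limit, including figures, tables, and references, applies regardless.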