NEWS FEATURE

THE LEARNING MACHINES

Using massive amounts of data to recognize photos and speech, deep-learning computers are taking a big step towards true artificial intelligence.

BY NICOLA JONES

NATURE | VOL 505 | 9 JANUARY 2014

Three years ago, researchers at the secretive Google X lab in Mountain View, California, extracted some 10 million still images from YouTube videos and fed them into Google Brain — a network of 1,000 computers programmed to soak up the world much as a human toddler does. After three days looking for recurring patterns, Google Brain decided, all on its own, that there were certain repeating categories it could identify: human faces, human bodies and … cats (ref. 1).

Google Brain's discovery that the Internet is full of cat videos provoked a flurry of jokes from journalists. But it was also a landmark in the resurgence of deep learning: a three-decade-old technique in which massive amounts of data and processing power help computers to crack messy problems that humans solve almost intuitively, from recognizing faces to understanding language.

Deep learning itself is a revival of an even older idea for computing: neural networks. These systems, loosely inspired by the densely interconnected neurons of the brain, mimic human learning by changing the strength of simulated neural connections on the basis of experience. Google Brain, with about 1 million simulated neurons and 1 billion simulated connections, was ten times larger than any deep neural network before it. Project founder Andrew Ng, now director of the Artificial Intelligence Laboratory at Stanford University in California, has gone on to make deep-learning systems ten times larger again.

Such advances make for exciting times in artificial intelligence (AI) — the often-frustrating attempt to get computers to think like humans. In the past few years, companies such as Google, Apple and IBM have been aggressively snapping up start-up companies and researchers with deep-learning expertise. For everyday consumers, the results include software better able to sort through photos, understand spoken commands and translate text from foreign languages. For scientists and industry, deep-learning computers can search for potential drug candidates, map real neural networks in the brain or predict the functions of proteins.

"AI has gone from failure to failure, with bits of progress. This could be another leapfrog," says Yann LeCun, director of the Center for Data Science at New York University and a deep-learning pioneer.

"Over the next few years we'll see a feeding frenzy. Lots of people will jump on the deep-learning bandwagon," agrees Jitendra Malik, who studies computer image recognition at the University of California, Berkeley. But in the long term, deep learning may not win the day; some researchers are pursuing other techniques that show promise. "I'm agnostic," says Malik. "Over time people will decide what works best in different domains."

INSPIRED BY THE BRAIN

Back in the 1950s, when computers were new, the first generation of AI researchers eagerly predicted that fully fledged AI was right around the corner. But that optimism faded as researchers began to grasp the vast complexity of real-world knowledge — particularly when it came to perceptual problems such as what makes a face a human face, rather than a mask or a monkey face. Hundreds of researchers and graduate students spent decades hand-coding rules about all the different features that computers needed to identify objects. "Coming up with features is difficult, time consuming and requires expert knowledge," says Ng. "You have to ask if there's a better way."

In the 1980s, one better way seemed to be deep learning in neural networks. These systems promised to learn their own rules from scratch, and offered the pleasing symmetry of using brain-inspired mechanics to achieve brain-like function. The strategy called for simulated neurons to be organized into several layers. Give such a system a picture and the first layer of learning will simply notice all the dark and light pixels. The next layer might realize that some of these pixels form edges; the next might distinguish between horizontal and vertical lines. Eventually, a layer might recognize eyes, and might realize that two eyes are usually present in a human face (see 'Facial recognition').

(Sidebar — NATURE.COM: Learn about another approach to brain-like computers: go.nature.com/fktnso)

The first deep-learning programs did not perform any better than simpler systems, says Malik. Plus, they were tricky to work with. "Neural nets were always a delicate art to manage. There is some black magic involved," he says. The networks needed a rich stream of examples to learn from — like a baby gathering information about the world. In the 1980s and 1990s, there was not much digital information available, and it took too long for computers to crunch through what did exist. Applications were rare. One of the few was a technique — developed by LeCun — that is now used by banks to read handwritten cheques.

By the 2000s, however, advocates such as LeCun and his former supervisor, computer scientist Geoffrey Hinton of the University of Toronto in Canada, were convinced that increases in computing power and an explosion of digital data meant that it was time for a renewed push. "We wanted to show the world that these deep neural networks were really useful and could really help," says George Dahl, a current student of Hinton's.

As a start, Hinton, Dahl and several others tackled the difficult but commercially important task of speech recognition. In 2009, the researchers reported (ref. 2) that after training on a classic data set — three hours of taped and transcribed speech — their deep-learning neural network had broken the record for accuracy in turning the spoken word into typed text, a record that had not shifted much in a decade with the standard, rules-based approach. The achievement caught the attention of major players in the smartphone market, says Dahl, who took the technique to Microsoft during an internship. "In a couple of years they all switched to deep learning." For example, the iPhone's voice-activated digital assistant, Siri, relies on deep learning.

GIANT LEAP

When Google adopted deep-learning-based speech recognition in its Android smartphone operating system, it achieved a 25% reduction in word errors. "That's the kind of drop you expect to take ten years to achieve," says Hinton — a reflection of just how difficult it has been to make progress in this area. "That's like ten breakthroughs all together."

Meanwhile, Ng had convinced Google to let him use its data and computers on what became Google Brain. The project's ability to spot cats was a compelling (but not, on its own, commercially viable) demonstration of unsupervised learning — the most difficult learning task, because the input comes without any explanatory information such as names, titles or categories. But Ng soon became troubled that few researchers outside Google had the tools to work on deep learning. "After many of my talks," he says, "depressed graduate students would come up to me and say: 'I don't have 1,000 computers lying around, can I even research this?'"

So back at Stanford, Ng started developing bigger, cheaper deep-learning networks using graphics processing units (GPUs) — the super-fast chips developed for home-computer gaming (ref. 3). Others were doing the same. "For about US$100,000 in hardware, we can build an 11-billion-connection network, with 64 GPUs," says Ng.

VICTORIOUS MACHINE

But winning over computer-vision scientists would take more: they wanted to see gains on standardized tests. Malik remembers that Hinton asked him: "You're a sceptic. What would convince you?" Malik replied that a victory in the internationally renowned ImageNet competition might do the trick.

In that competition, teams train computer programs on a data set of about 1 million images that have each been manually labelled with a category. After training, the programs are tested by getting them to suggest labels for similar images that they have never seen before. They are given five guesses for each test image; if the right answer is not one of those five, the test counts as an error. Past winners had typically erred about 25% of the time. In 2012, Hinton's lab entered the first ever competitor to use deep learning. It had an error rate of just 15% (ref. 4).

"Deep learning stomped on everything else," says LeCun, who was not part of that team. The win landed Hinton a part-time job at Google, and the company used the program to update its Google+ photo-search software in May 2013.

Malik was won over. "In science you have to be swayed by empirical evidence, and this was clear evidence," he says. Since then, he has adapted the technique to beat the record in another visual-recognition competition (ref. 5). Many others have followed: in 2013, all entrants to the ImageNet competition used deep learning.

With triumphs in hand for image and speech recognition, there is now increasing interest in applying deep learning to natural-language understanding — comprehending human discourse well enough to rephrase or answer questions, for example — and to translation from one language to another. Again, these are currently done using hand-coded rules and statistical analysis of known text. The state-of-the-art of such techniques can be seen in software such as Google Translate, which can produce results that are comprehensible (if
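The article describes neural networks as systems that "mimic human learning by changing the strength of simulated neural connections on the basis of experience." The sketch below is a minimal, hypothetical illustration of that idea in Python with NumPy: a single simulated neuron whose connection strengths (weights) are nudged by gradient descent to reduce its error on one labelled example. The input values, target and learning rate are invented for illustration; this is not the algorithm used by Google Brain or any system named in the article.

```python
import numpy as np

# One simulated neuron with three input connections. "Experience" here is a
# single labelled example; learning repeatedly nudges the connection
# strengths (weights) in the direction that reduces the prediction error.
rng = np.random.default_rng(0)
weights = rng.normal(size=3)            # connection strengths, initially random
x = np.array([0.5, -1.2, 0.3])          # one input example (hypothetical feature values)
target = 1.0                            # desired output for this example
learning_rate = 0.1

for step in range(20):
    prediction = 1.0 / (1.0 + np.exp(-weights @ x))        # sigmoid "firing rate"
    error = prediction - target
    # Gradient of the squared error with respect to the weights.
    gradient = error * prediction * (1.0 - prediction) * x
    weights -= learning_rate * gradient                     # strengthen/weaken connections

print(weights)   # weights have shifted so the neuron's output is closer to the target
```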
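The layer-by-layer description in the article — pixels, then edges, then parts such as eyes, then whole categories — is the intuition behind convolutional neural networks. The following is a minimal sketch in PyTorch (a library chosen here for brevity, not one mentioned in the article); the layer sizes and the ten output categories are arbitrary, and the comments map each stage onto the article's description rather than onto anything a particular trained network is guaranteed to learn.

```python
import torch
import torch.nn as nn

# A tiny convolutional network whose stages loosely mirror the article's
# hierarchy: raw pixels -> edges -> parts (e.g. eyes) -> object categories.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # stage 1: local light/dark pixel patterns
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # stage 2: combinations of pixels, i.e. edges and lines
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),  # stage 3: arrangements of edges, i.e. parts
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 10),                            # final stage: scores for 10 hypothetical categories
)

x = torch.randn(1, 3, 64, 64)   # one random 64x64 RGB "image"
scores = model(x)
print(scores.shape)              # torch.Size([1, 10])
```

The point of the contrast drawn in the article is that the features at each depth are learned from data during training, rather than hand-coded by researchers as in earlier computer-vision systems.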
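The ImageNet scoring rule described above — five guesses per image, with an error counted whenever the true label is not among them — is the standard "top-5 error". Below is a small sketch of how that metric could be computed, using NumPy and made-up scores; the competition's actual evaluation code is not part of the article, so treat this only as an illustration of the metric.

```python
import numpy as np

def top5_error(scores: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of images whose true label is not among the 5 highest-scoring guesses.

    scores: (n_images, n_classes) array of model scores.
    labels: (n_images,) array of true class indices.
    """
    # Indices of the five largest scores per row (order within the five is irrelevant).
    top5 = np.argpartition(scores, -5, axis=1)[:, -5:]
    hits = (top5 == labels[:, None]).any(axis=1)
    return 1.0 - hits.mean()

# Toy example: 4 "images", 10 classes, random scores.
rng = np.random.default_rng(1)
scores = rng.normal(size=(4, 10))
labels = np.array([2, 7, 0, 5])
print(top5_error(scores, labels))
```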