Machine Learning and AI via Brain Simulations
Andrew Ng, Stanford University & Google

Thanks to:
- Stanford: Adam Coates, Quoc Le, Honglak Lee, Andrew Saxe, Andrew Maas, Chris Manning, Jiquan Ngiam, Richard Socher, Will Zou
- Google: Kai Chen, Greg Corrado, Jeff Dean, Matthieu Devin, Rajat Monga, Marc'Aurelio Ranzato, Paul Tucker, Kay Le

This talk

The idea of "deep learning." Using brain simulations, we hope to:
- Make learning algorithms much better and easier to use.
- Make revolutionary advances in machine learning and AI.

This vision is not mine alone; it is shared with many researchers, e.g., Samy Bengio, Yoshua Bengio, Tom Dean, Jeff Dean, Nando de Freitas, Jeff Hawkins, Geoff Hinton, Quoc Le, Yann LeCun, Honglak Lee, Tommy Poggio, Ruslan Salakhutdinov, Josh Tenenbaum, Kai Yu, Jason Weston, and others. I believe this is our best shot at progress towards real AI.

What do we want computers to do with our data?
- Images/video: label ("Motorcycle"), suggest tags, image search, ...
- Audio: speech recognition, music classification, speaker identification, ...
- Text: web search, anti-spam, machine translation, ...

Computer vision is hard!

(Slide shows many motorcycle images under widely varying poses, lighting, and occlusion.)

Machine learning performs well on many of these problems, but it is a lot of work. What is it about machine learning that makes it so hard to use?

Machine learning for image classification

"Motorcycle"

This talk develops its ideas using images and audio; the ideas apply to other problems (e.g., text) too.

Why is this hard?
You see this: a motorcycle. But the camera sees this: a grid of raw pixel intensity values.

Machine learning and feature representations

Feeding raw pixels (say, pixel 1 and pixel 2) straight into a learning algorithm works poorly: plotted in pixel space, motorbikes and "non"-motorbikes are hopelessly intermixed.

What we want

Input → feature representation → learning algorithm, where the features answer questions like "Does it have handlebars? Wheels?" In a feature space such as (wheels, handlebars), motorbikes and "non"-motorbikes become separable.

Computing features in computer vision

But we don't have a handlebars detector. So researchers try to hand-design features that capture various statistical properties of the image: find edges, sum up edge strength at four orientations in each quadrant, and concatenate the results into the final feature vector.

How is computer perception done?
- Images/video → vision features → detection
- Image → low-level features → grasp point
- Audio → audio features → speaker ID
- Text → text features → text classification, machine translation, information retrieval, ...

Computer vision features: SIFT, spin images, HoG, RIFT, textons, GLOH.

Audio features: spectrogram, MFCC, flux, ZCR, rolloff.

NLP features: named entity recognition, stemming, parser features, part of speech, anaphora, ontologies (WordNet).

Coming up with features is difficult, time-consuming, and requires expert knowledge. When working on applications of learning, we spend a lot of time tuning the features.
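As a concrete sketch of such hand-engineering, the quadrant pipeline described above (find edges, then sum edge strength at four orientations in each quadrant) might look roughly like this in NumPy. This is a simplified illustration, not the exact descriptor from the slide; the function name and binning scheme are my own:

```python
import numpy as np

def quadrant_edge_features(img, n_orientations=4):
    """Hand-engineered descriptor: total edge strength at each of
    `n_orientations` orientations, within each of the four image quadrants."""
    img = img.astype(float)
    gy, gx = np.gradient(img)            # vertical / horizontal intensity gradients
    mag = np.hypot(gx, gy)               # edge strength per pixel
    ang = np.arctan2(gy, gx) % np.pi     # orientation, folded into [0, pi)
    bins = np.minimum((ang / np.pi * n_orientations).astype(int),
                      n_orientations - 1)
    h, w = img.shape
    feats = []
    for rows in (slice(0, h // 2), slice(h // 2, h)):
        for cols in (slice(0, w // 2), slice(w // 2, w)):
            q_mag, q_bin = mag[rows, cols], bins[rows, cols]
            feats.extend(q_mag[q_bin == o].sum() for o in range(n_orientations))
    return np.array(feats)               # 4 quadrants * n_orientations values
```

Note how much domain knowledge is baked into even this tiny descriptor (gradient operator, orientation folding, quadrant layout); that hand-tuning is exactly the burden feature learning aims to remove.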
The "one learning algorithm" hypothesis
- Rewire visual input to the auditory cortex, and the auditory cortex learns to see. [Roe et al., 1992]
- The somatosensory cortex likewise learns to see. [Metin & Frost, 1989]

Sensor representations in the brain
- Seeing with your tongue
- Human echolocation (sonar)
- Haptic belt: direction sense
- Implanting a 3rd eye
[BrainPort; Welsh & Blasch, 1997; Nagel et al., 2005; Constantine-Paton & Law, 2009]

On two approaches to computer perception

The adult visual system computes an incredibly complicated function of the input. We can try to implement most of this function directly, by hand-engineering features. Or can we learn this function instead? A trained learning algorithm (e.g., a neural network, boosting, a decision tree, an SVM, ...) is very complex, but the learning algorithm itself is usually very simple: the complexity of the trained system comes from the data, not from the algorithm.

Learning input representations

Find a better way to represent images than raw pixels; find a better way to represent audio.

Feature learning problem
- Given a 14x14 image patch x, we can represent it using 196 real numbers (255, 98, 93, 87, 89, 91, 48, ...).
- Problem: can we learn a better feature vector to represent this patch?

Self-taught learning (unsupervised feature learning)

Learn features from a large pool of unlabeled images; then use those features at test time to answer "What is this?" (motorcycle / not motorcycle). [This uses unlabeled data. One can learn the features from labeled data too.]
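The self-taught learning recipe (learn a representation from unlabeled data, then train a supervised classifier on the re-represented labeled data) can be sketched as below. This is a deliberately simplified stand-in: synthetic Gaussian data replaces image patches, and k-means replaces the sparse feature learners discussed later; all names and parameters are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two synthetic "classes" (stand-ins for motorcycle / non-motorcycle patches).
def sample(n, center):
    return rng.normal(loc=center, scale=1.0, size=(n, 5))

# Step 1: learn a representation from UNLABELED data (no labels used here).
unlabeled = np.vstack([sample(200, 0.0), sample(200, 4.0)])
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(unlabeled)

# Step 2: re-represent LABELED examples as distances to the learned centroids,
# then train an ordinary supervised classifier on the new features.
X_train = np.vstack([sample(50, 0.0), sample(50, 4.0)])
y_train = np.array([0] * 50 + [1] * 50)
X_test = np.vstack([sample(50, 0.0), sample(50, 4.0)])
y_test = np.array([0] * 50 + [1] * 50)

clf = LogisticRegression().fit(kmeans.transform(X_train), y_train)
acc = clf.score(kmeans.transform(X_test), y_test)
```

The point is the division of labor: the representation is learned without labels, and only the final (cheap) classifier consumes the labeled set.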
First stage of visual processing: V1

V1 is the first stage of visual processing in the brain. Neurons in V1 are typically modeled as edge detectors.

Feature learning via sparse coding

Sparse coding (Olshausen & Field, 1996) was originally developed to explain early visual processing in the brain (edge detection).

Input: images x(1), x(2), ..., x(m), each in R^{n x n}.
Learn: a dictionary of bases f1, f2, ..., fk (also in R^{n x n}), so that each input x can be approximately decomposed as

    x ≈ Σ_j a_j f_j   (j = 1, ..., k),

subject to the a_j's being mostly zero ("sparse"). [NIPS 2006, 2007]

Sparse coding illustration

Trained on natural images, the learned bases f1, ..., f64 are edge-like. A test patch then decomposes as, e.g.,

    x ≈ 0.8 * f36 + 0.3 * f42 + 0.5 * f63,

i.e., [a1, ..., a64] = [0, ..., 0, 0.8, 0, ..., 0, 0.3, 0, ..., 0, 0.5, 0] — a more succinct, higher-level feature representation.

More examples
- 0.6 * f15 + 0.8 * f28 + 0.4 * f37, represented as [a15 = 0.6, a28 = 0.8, a37 = 0.4].
- 1.3 * f5 + 0.9 * f18 + 0.3 * f29, represented as [a5 = 1.3, a18 = 0.9, a29 = 0.3].

The method "invents" edge detection: it automatically learns to represent an image in terms of the edges that appear in it, giving a more succinct, higher-level representation than the raw pixels, and one that is quantitatively similar to primary visual cortex (area V1) in the brain.

Sparse coding applied to audio

Twenty basis functions learned from unlabeled audio. [Evan Smith & Mike Lewicki, 2006]
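A minimal sketch of the sparse decomposition above: iterative soft-thresholding (ISTA) solves the lasso-style objective 0.5 * ||x - F a||^2 + lam * ||a||_1 for a fixed random dictionary. The slide's example coefficients (a36 = 0.8, a42 = 0.3, a63 = 0.5) are used to synthesize the input; learning the dictionary itself is omitted here, and all sizes and hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 64, 128                        # signal dimension, number of bases
F = rng.normal(size=(n, k))
F /= np.linalg.norm(F, axis=0)        # normalize each basis f_j

# Build a signal that truly is a sparse combination of three bases,
# echoing the slide's example: x = 0.8*f36 + 0.3*f42 + 0.5*f63.
a_true = np.zeros(k)
a_true[[36, 42, 63]] = [0.8, 0.3, 0.5]
x = F @ a_true

def ista(x, F, lam=0.01, n_iters=500):
    """Iterative soft-thresholding for min_a 0.5*||x - F a||^2 + lam*||a||_1."""
    step = 1.0 / np.linalg.norm(F, ord=2) ** 2   # 1 / Lipschitz constant
    a = np.zeros(F.shape[1])
    for _ in range(n_iters):
        grad = F.T @ (F @ a - x)                 # gradient of the quadratic term
        a = a - step * grad
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # soft-threshold
    return a

a = ista(x, F)
```

The recovered code `a` is sparse, with its few large coefficients sitting exactly on the bases that generated the signal; this is the "succinct, higher-level representation" the slides describe.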
Sparse coding applied to touch data

Touch data was collected using a glove, following the distribution of grasps used by animals in the wild [Macfarlane & Graziano, 2009]. (The slide compares the biological distribution of log(excitatory/inhibitory area) with the distributions produced by several learning algorithms — sparse autoencoder, sparse RBM, ICA, and K-means sample bases; the model and experimental PDFs match closely, p = 0.5872.) [Andrew Saxe]

Learning feature hierarchies

On top of the input pixels x1, x2, x3, x4 sits a "sparse coding" layer a1, a2, a3 that learns edges (cf. V1); a higher layer then learns combinations of edges (cf. V2), and further layers can model V3 and beyond. [Technical details: sparse autoencoder or a sparse version of Hinton's DBN.] [Lee, Ranganath & Ng, 2007]

Hierarchical sparse coding (sparse DBN): trained on face images

Training set: aligned images of faces. From bottom to top, the hierarchy learns pixels → edges → object parts (combinations of edges) → object models. [Honglak Lee]

Hierarchical sparse coding (sparse DBN)

Features learned from training on different object classes:
Faces, cars, elephants, chairs. [Honglak Lee]

Machine learning applications

Video: activity recognition (Hollywood 2 benchmark)

Method                                                    Accuracy
Hessian + ESURF [Willems et al., 2008]                    38%
Harris3D + HOG/HOF [Laptev et al., 2003, 2004]            45%
Cuboids + HOG/HOF [Dollar et al., 2005; Laptev, 2004]     46%
Hessian + HOG/HOF [Laptev, 2004; Willems et al., 2008]    46%
Dense + HOG/HOF [Laptev, 2004]                            47%
Cuboids + HOG3D [Klaser, 2008; Dollar et al., 2005]       46%
Unsupervised feature learning (our method)                52%

Unsupervised feature learning significantly improves on the previous state of the art. [Le, Zhou & Ng, 2011]

Sparse coding on audio (speech)

A spectrogram patch decomposes as, e.g., x ≈ 0.9 * f36 + 0.7 * f42 + 0.2 * f63.

Dictionary of bases f_i learned for speech: many bases seem to correspond to phonemes.
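The sparse autoencoder mentioned in the technical notes above can be sketched as a single hidden layer trained with a KL-divergence sparsity penalty on the mean hidden activations, in the spirit of Ng's UFLDL formulation. Random data stands in for image patches, and all sizes and hyperparameters here are illustrative, not the ones used in the cited work:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

n_vis, n_hid, m = 64, 25, 256           # e.g., 8x8 patches, 25 hidden units
X = rng.normal(size=(n_vis, m)) * 0.1   # random stand-ins for image patches

W1 = rng.normal(size=(n_hid, n_vis)) * 0.1; b1 = np.zeros((n_hid, 1))
W2 = rng.normal(size=(n_vis, n_hid)) * 0.1; b2 = np.zeros((n_vis, 1))
rho, beta, lr = 0.05, 0.1, 0.5          # target sparsity, penalty weight, step size

def loss_and_grads(W1, b1, W2, b2, X):
    m = X.shape[1]
    A1 = sigmoid(W1 @ X + b1)                     # hidden activations
    Xhat = W2 @ A1 + b2                           # linear reconstruction
    rho_hat = A1.mean(axis=1, keepdims=True)      # mean activation per hidden unit
    recon = 0.5 * np.sum((Xhat - X) ** 2) / m
    kl = np.sum(rho * np.log(rho / rho_hat) +
                (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    # Backprop: reconstruction error plus the sparsity penalty's contribution.
    D2 = (Xhat - X) / m
    dW2 = D2 @ A1.T; db2 = D2.sum(axis=1, keepdims=True)
    sparsity = beta * (-rho / rho_hat + (1 - rho) / (1 - rho_hat)) / m
    D1 = (W2.T @ D2 + sparsity) * A1 * (1 - A1)
    dW1 = D1 @ X.T; db1 = D1.sum(axis=1, keepdims=True)
    return recon + beta * kl, dW1, db1, dW2, db2

losses = []
for _ in range(200):
    L, dW1, db1, dW2, db2 = loss_and_grads(W1, b1, W2, b2, X)
    losses.append(L)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

The KL term pushes each hidden unit's average activation toward the small target rho, so only a few units fire per input; stacking such layers is one way to build the feature hierarchies described above.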
