
Back-Propagation and Networks for Recognition
COMPSCI 527 — Computer Vision

Outline
1 Loss and Risk
2 Back-Propagation
3 Convolutional Neural Networks
4 AlexNet
5 The State of the Art of Image Classification

Loss and Risk

Training is Empirical Risk Minimization
• Classifier network: p = h(x; w) ∈ R^K, then ŷ = arg max_k p_k
• Define a loss ℓ(y, ŷ): How much do we pay when the true label is y and the network says ŷ?
• Risk is the average loss over the training set T = {(x_1, y_1), ..., (x_N, y_N)}:
  L_T(w) = (1/N) Σ_{n=1}^{N} ℓ_n(w)   with   ℓ_n(w) = ℓ(y_n, arg max_k h_k(x_n; w))
• Determine network weights w ∈ R^m that yield a low risk L_T(w)
• Use Stochastic Gradient Descent, because m is large and L_T(w) is a sum of many terms
• Two large numbers: m and N
• We need ∇L_B(w) over mini-batches B, and therefore ∇ℓ_n(w)

The 0-1 Loss is Useless
• Example: K = 5 classes, scores p = h(x_n; w) as in the figure
• True label y_n = 2, predicted label ŷ_n = 0 because p_0 > p_{y_n} = p_2. Therefore, the loss is 1
  [Figure: bar plot of the scores p_k for k = 0, ..., 4, with p_0 > p_2]
• Changing w by an infinitesimal amount may reduce but not close the gap between p_0 and p_2: the loss stays 1
• That is, ∇ℓ_n(w) = ∂ℓ_{0-1}/∂w = 0
• The gradient provides no information towards reducing the gap!

The Cross-Entropy Loss
• We need to compute the loss on the score p, not on the prediction ŷ_n
• Use the cross-entropy loss on the score p as a proxy loss: ℓ(y, p) = −log p_y
  [Figure: plot of −log p_y for p_y between 0 and 1]
• Unbounded loss for total misclassification
• Differentiable, nonzero gradient everywhere
• Meshes well with the soft-max

Example, Continued
• The last layer before the soft-max has activations z ∈ R^K
• The soft-max has output p = σ(z) ∈ R^5 with p_k = e^{z_k} / Σ_{j=0}^{4} e^{z_j}
• p_k > 0 for all k and Σ_{k=0}^{4} p_k = 1
• Ideally, if the correct class is y = 2, we would like the output p to equal q = [0, 0, 1, 0, 0], the one-hot encoding of y
• That is, q_y = q_2 = 1 and all other q_j are zero
• ℓ(y, p) = −log p_y = −log p_2
• When p approaches q we have p_y → 1 and ℓ(y, p) → 0
• When p is far from q we have p_y → 0 and ℓ(y, p) → ∞

Example, Continued
• The cross-entropy loss meshes well with the soft-max:
  ℓ(y, p) = −log p_y = −log ( e^{z_y} / Σ_{j=0}^{4} e^{z_j} ) = log ( Σ_{j=0}^{4} e^{z_j} ) − z_y
  [Figure: plot of the loss as a function of z_y, with a soft hinge shape]
• When z_y ≫ z_{y'} for all y' ≠ y we have log ( Σ_{j=0}^{4} e^{z_j} ) ≈ log e^{z_y} = z_y, so that ℓ(y, p) → 0
• When z_y ≪ z_{y'} for some y' ≠ y we have log ( Σ_{j=0}^{4} e^{z_j} ) ≈ c, with c effectively independent of z_y, so that ℓ(y, p) ≈ c − z_y grows linearly as z_y decreases (the actual plot depends on all the values in z)
• This is a "soft hinge loss" in z (not in p)
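As a concrete illustration of the last formula, the following is a minimal NumPy sketch (not part of the original slides) of the cross-entropy loss computed directly from the scores z as log Σ_j e^{z_j} − z_y; shifting by max(z) before exponentiating is a standard numerical-stability trick, and the example score vectors are made up.

```python
import numpy as np

def cross_entropy_from_logits(z, y):
    """Cross-entropy loss l(y, p) = -log p_y with p = softmax(z),
    computed as log(sum_j exp(z_j)) - z_y. Shifting by max(z) before
    exponentiating avoids overflow and cancels out in the result."""
    z = np.asarray(z, dtype=float)
    m = z.max()
    return m + np.log(np.exp(z - m).sum()) - z[y]

# Small when z_y dominates, roughly c - z_y when some other score dominates:
print(cross_entropy_from_logits([1.0, 0.0, 9.0, 0.0, 0.0], y=2))   # close to 0
print(cross_entropy_from_logits([9.0, 0.0, 1.0, 0.0, 0.0], y=2))   # close to 8
```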
Back-Propagation

Back-Propagation
• We need ∇L_B(w) over some mini-batch B, and therefore, for a network with J layers,
  ∇ℓ_n(w) = ∂ℓ_n/∂w = [ ∂ℓ_n/∂w^(1), ..., ∂ℓ_n/∂w^(J) ]^T
  [Diagram: x_n = x^(0) → h^(1) → x^(1) → h^(2) → x^(2) → h^(3) → x^(3) = p → ℓ → ℓ_n, with weights w^(1), w^(2), w^(3) feeding the layers and the label y_n feeding the loss]
• Computations from x_n to ℓ_n form a chain: use the chain rule!
• Derivatives of ℓ_n w.r.t. layer j or before go through x^(j):
  ∂ℓ_n/∂w^(j)   = ∂ℓ_n/∂x^(j) · ∂x^(j)/∂w^(j)
  ∂ℓ_n/∂x^(j−1) = ∂ℓ_n/∂x^(j) · ∂x^(j)/∂x^(j−1)   (recursion!)
• Start: ∂ℓ_n/∂x^(J) = ∂ℓ/∂p

Local Jacobians
• Local computations at layer j: ∂x^(j)/∂w^(j) and ∂x^(j)/∂x^(j−1)
• Partial derivatives of h^(j) with respect to the layer weights and the input to the layer
• Local Jacobian matrices, can compute by knowing what the layer does
• The start of the process can be computed from knowing the loss function: ∂ℓ_n/∂x^(J) = ∂ℓ/∂p
• Another local Jacobian
• The rest is going recursively from output to input, one layer at a time, accumulating the ∂ℓ_n/∂w^(j) into a vector ∂ℓ_n/∂w

Back-Propagation Spelled Out for J = 3
  ∂ℓ_n/∂x^(3) = ∂ℓ/∂p
  ∂ℓ_n/∂w^(3) = ∂ℓ_n/∂x^(3) · ∂x^(3)/∂w^(3)
  ∂ℓ_n/∂x^(2) = ∂ℓ_n/∂x^(3) · ∂x^(3)/∂x^(2)
  ∂ℓ_n/∂w^(2) = ∂ℓ_n/∂x^(2) · ∂x^(2)/∂w^(2)
  ∂ℓ_n/∂x^(1) = ∂ℓ_n/∂x^(2) · ∂x^(2)/∂x^(1)
  ∂ℓ_n/∂w^(1) = ∂ℓ_n/∂x^(1) · ∂x^(1)/∂w^(1)
  ∂ℓ_n/∂x^(0) = ∂ℓ_n/∂x^(1) · ∂x^(1)/∂x^(0)
• The results are collected into ∂ℓ_n/∂w = [ ∂ℓ_n/∂w^(1); ∂ℓ_n/∂w^(2); ∂ℓ_n/∂w^(3) ]
• (The Jacobians shown in blue on the original slide, of the form ∂x^(j)/∂w^(j) and ∂x^(j)/∂x^(j−1), are local)
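To make the recursion above concrete, here is a small NumPy sketch (not from the slides) that back-propagates through a two-layer network with the soft-max cross-entropy loss of the previous section; the layer sizes, the ReLU nonlinearity, and the numerical check at the end are illustrative assumptions rather than anything the slides prescribe.

```python
import numpy as np

rng = np.random.default_rng(0)
d, e, K = 4, 6, 5                      # input size, hidden size, number of classes
W1, b1 = rng.standard_normal((e, d)), np.zeros(e)
W2, b2 = rng.standard_normal((K, e)), np.zeros(K)
x, y = rng.standard_normal(d), 2       # one training sample (x_n, y_n)

def loss(W1, b1, W2, b2, x, y):
    """Forward pass x^(0) -> x^(1) -> x^(2) = z, then cross-entropy on z."""
    x1 = np.maximum(W1 @ x + b1, 0.0)
    z = W2 @ x1 + b2
    return np.log(np.exp(z).sum()) - z[y]

# Forward pass, keeping the intermediate activations x^(j)
x1 = np.maximum(W1 @ x + b1, 0.0)      # x^(1)
z = W2 @ x1 + b2                       # x^(2) = scores before the soft-max

# Backward pass: start with dl/dx^(2), then multiply by local Jacobians,
# accumulating dl/dw^(j) for each layer on the way back to the input.
p = np.exp(z) / np.exp(z).sum()        # soft-max output
g2 = p - np.eye(K)[y]                  # dl/dx^(2): start of the recursion
dW2 = np.outer(g2, x1)                 # dl/dW2 = dl/dx^(2) * dx^(2)/dW2
db2 = g2                               # dl/db2
g1 = W2.T @ g2                         # dl/dx^(1) = dl/dx^(2) * dx^(2)/dx^(1)
g1 = g1 * (x1 > 0)                     # through the ReLU: dl/d(W1 x + b1)
dW1 = np.outer(g1, x)                  # dl/dW1
db1 = g1                               # dl/db1

# Check one entry of dl/dW1 against a numerical derivative
eps = 1e-6
W1p = W1.copy()
W1p[0, 0] += eps
numeric = (loss(W1p, b1, W2, b2, x, y) - loss(W1, b1, W2, b2, x, y)) / eps
print(numeric, dW1[0, 0])              # the two values should agree closely
```

Each backward line multiplies the running gradient by one local Jacobian, exactly as in the spelled-out recursion for J = 3.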
A Google Deep Dream Image
• Train a network to recognize animals (yields w)
• Set x_0 = random noise image, y = dog
• Minimize ℓ(y, h(x_0; w)) over the image x_0, rather than L_T(w) over the weights w

Convolutional Neural Networks

Convolutional Layers
• A layer with input x ∈ R^d and output y ∈ R^e has e neurons, each with d gains and one bias
• Total of (d + 1)e weights to be trained in a single layer
• For images, d and e are on the order of hundreds of thousands or even millions
• Too many parameters
• Convolutional layers are layers restricted in a special way
• Many fewer parameters to train
• Also some justification in terms of heuristic principles (see notes)

A Convolutional Layer
• Convolution + bias: a = x ∗ v + b
• Example: 3 × 4 input image x, 2 × 2 kernel v = (v00 v01; v10 v11) ["same"-style convolution]
  [Figure: the flipped kernel (v11 v10; v01 v00) placed at each of the 12 output positions of the 3 × 4 image]
• Do you want to see this as one convolution with v (plus bias) or as 12 neurons with the same weights?

"Local" Neurons
• Neurons are now "local"
• Just means that many coefficients are zero:
    0   0    0   0
    0  v00  v01  0
    0  v10  v11  0
• If a neuron is viewed as being connected to all input pixels, then the 12 neurons share their nonzero weights
• So a convolutional layer is the same as a fully-connected layer where each neuron has many weights clamped to zero, and the remaining weights are shared across neurons

There is Still a Gain Matrix
  [Figure: input grid x with rows 0-2 and columns 0-3, kernel v, and output grid a, with output a12 highlighted]
• Neuron number 6 (starting at 0): a12 = v11 x12 + v10 x13 + v01 x22 + v00 x23 + b
• Activation number six: a12 = V[6, :] x, where
  x = (x00, x01, x02, x03, x10, x11, x12, x13, x20, x21, x22, x23)^T
  V[6, :] = (0, 0, 0, 0, 0, 0, v11, v10, 0, 0, v01, v00)

Gain Matrix for a Convolutional Layer
  a = x ∗ v + b = V x + b, with a and x unrolled in row-major order into the column vectors
  a = (a00, a01, ..., a23)^T and x = (x00, x01, ..., x23)^T, and
  V =
  [ v11 v10  0   0  v01 v00  0   0   0   0   0   0  ]
  [  0  v11 v10  0   0  v01 v00  0   0   0   0   0  ]
  [  0   0  v11 v10  0   0  v01 v00  0   0   0   0  ]
  [  0   0   0  v11 v10  0   0  v01 v00  0   0   0  ]
  [  0   0   0   0  v11 v10  0   0  v01 v00  0   0  ]
  [  0   0   0   0   0  v11 v10  0   0  v01 v00  0  ]
  [  0   0   0   0   0   0  v11 v10  0   0  v01 v00 ]
  [  0   0   0   0   0   0   0  v11 v10  0   0  v01 ]
  [  0   0   0   0   0   0   0   0  v11 v10  0   0  ]
  [  0   0   0   0   0   0   0   0   0  v11 v10  0  ]
  [  0   0   0   0   0   0   0   0   0   0  v11 v10 ]
  [  0   0   0   0   0   0   0   0   0   0   0  v11 ]
• A "regular" layer with many zeros and shared weights [boundary neurons have fewer nonzero weights]
• Zeros cannot be changed during training

Stride
• Activation aij is often similar to a_{i,j+1} and a_{i+1,j}
• Images often vary slowly over space
• Activations are redundant
• Reduce the redundancy by computing convolutions with a stride s_m greater than one
• Only compute every s_m-th output value in dimension m
• Output size shrinks from d1 × d2 to about d1/s1 × d2/s2
• Typically s_m = s (the same stride in all dimensions)
• Layers get smaller and smaller because of stride
• Multiscale image analysis, efficiency

Max Pooling
• Another way to reduce output resolution is max pooling
• This is a layer of its own, separate from convolution
• Consider k × k windows with stride s
• Often s = k (adjacent, non-overlapping windows)
• For each window, output the maximum value (see the sketch at the end of these notes)
• Output is about d1/s × d2/s
• Returns the highest response in the window, rather than the response at a fixed position
• More expensive than strided convolution, because the entire convolution needs to be computed before the max is found in each window
• No longer very popular

AlexNet

The Input Layer of AlexNet
• AlexNet, circa 2012, classifies color images into one of 1000 categories
• Trained on ImageNet, a large database with millions of labeled images
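As promised in the max-pooling slide, here is a rough NumPy sketch (not from the slides) of max pooling over k × k windows with stride s; the window size, stride, and toy input are arbitrary choices made for illustration.

```python
import numpy as np

def max_pool(x, k, s=None):
    """Max pooling: take the maximum over k x k windows with stride s.

    With s = k (the default) the windows are adjacent and non-overlapping,
    and the output shrinks from d1 x d2 to about d1/s x d2/s.
    """
    s = k if s is None else s
    d1, d2 = x.shape
    return np.array([[x[i:i + k, j:j + k].max()
                      for j in range(0, d2 - k + 1, s)]
                     for i in range(0, d1 - k + 1, s)])

x = np.arange(36, dtype=float).reshape(6, 6)   # a toy 6 x 6 activation map
print(max_pool(x, k=2).shape)                  # (3, 3): resolution drops by s
print(max_pool(x, k=2))                        # largest value in each 2 x 2 window
```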