Physics-Based Models of Deformation

Stelian Coros

Over the past few weeks…
• What characteristics are we looking for in a deformation model?

Elasticity – Definition
Elasticity: the ability of a material to resist a deforming force and to return to its original size and shape when that force is removed.

Elasticity – Mechanisms
• Silicon [Si]: crystal structure
• Silicone [Si(CH3)2O]n: polymer structure
(Images: Wikipedia)

Modeling a simple elastic object

Hookean Springs in 1D
Elasticity: the ability of the spring to return to its initial length when the deforming force is removed.
• Ceiiinosssttuv. (Hooke, 1676)
• Ut tensio, sic vis. (Hooke, 1678) — "as the extension, so the force"
The force is linear in the extension. With undeformed length $L$, deformed length $l$, and spring stiffness $k$:
$$f^{ext} = k(l - L) \qquad \text{(Hooke's Law)}$$

Hookean Springs in 1D (cont.)
For elastic springs, forces are conservative, i.e., no energy is lost during deformation. The work done by the force to deform the spring is
$$W = \int_L^l f^{ext}(x)\,dx = \int_L^l k(x - L)\,dx,$$
so the stored energy of the spring is
$$E = W = \tfrac{1}{2}k(l - L)^2.$$
The internal force $f^{int}$ exerted by the spring follows as the negative gradient of $E$ (how come?):
$$f^{int} = -\frac{dE}{dl} = -k(l - L).$$

Hookean Springs – Generalization
• Inconvenience: springs made of the same material but with different lengths have different stiffness characteristics. Consider a spring with rest length $L_1 = L$ that deforms by $\delta l = L/2$, and a spring with rest length $L_2 = 100L$ that deforms by the same $\delta l$. With the same stiffness $k$, Hooke's law gives the same internal force — but which one feels stiffer? Note that the relative deformation is VERY different!
• Fix: use the relative deformation $\varepsilon = \frac{l - L}{L}$ and a material stiffness $\tilde{k}$. The energy density is then $\Psi = \tfrac{1}{2}\tilde{k}\varepsilon^2$, and the total stored energy is an integrated quantity:
$$E = \tfrac{1}{2}\tilde{k}\varepsilon^2 L.$$

Hookean Springs in $\mathbf{R}^n$
The configuration of a spring is determined by the positions of its two endpoints.
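Before moving to $n$ dimensions, the generalized 1D spring energy and internal force above can be written as a minimal sketch (function names are illustrative, not from the slides):

```python
def spring_energy(l, L, k):
    """Stored energy of a 1D Hookean spring: E = 1/2 * k * eps^2 * L,
    with strain eps = (l - L) / L and material stiffness k."""
    eps = (l - L) / L
    return 0.5 * k * eps**2 * L

def spring_force(l, L, k):
    """Internal force f_int = -dE/dl = -k * eps."""
    return -k * (l - L) / L
```

Note that the force is exactly the negative derivative of the energy with respect to the current length, as derived above.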
We will distinguish between
• deformed positions $\boldsymbol{x}_1, \boldsymbol{x}_2 \in \mathbf{R}^n$
• undeformed positions $\bar{\boldsymbol{x}}_1, \bar{\boldsymbol{x}}_2 \in \mathbf{R}^n$
Lengths are functions of positions, i.e.,
$$l = \|\boldsymbol{x}_2 - \boldsymbol{x}_1\|_2 \qquad \text{and} \qquad L = \|\bar{\boldsymbol{x}}_2 - \bar{\boldsymbol{x}}_1\|_2.$$
Everything else stays the same.

Hookean Springs in nD
One mass point, one spring.
• Deformation measure: $\varepsilon = \frac{l}{L} - 1$
• Elastic energy: $E = \tfrac{1}{2}\tilde{k}\varepsilon^2 L$
• Forces: $\boldsymbol{f}_{int} = -\frac{\partial E}{\partial \boldsymbol{x}}$
Working it out:
$$\boldsymbol{f}_1 = -\frac{\partial E(\boldsymbol{x}_1, \boldsymbol{x}_2)}{\partial \boldsymbol{x}_1} = -\frac{\partial E}{\partial l}\left(\frac{\partial l}{\partial \boldsymbol{x}_1}\right)^T$$
With $l = \|\boldsymbol{e}\|_2 = \sqrt{\boldsymbol{e}^T\boldsymbol{e}}$ and $\boldsymbol{e} = \boldsymbol{x}_2 - \boldsymbol{x}_1$:
$$\frac{\partial E}{\partial l} = \tilde{k}\varepsilon, \qquad \frac{\partial l}{\partial \boldsymbol{x}_1} = -\frac{\partial \sqrt{\boldsymbol{e}^T\boldsymbol{e}}}{\partial \boldsymbol{e}} = -\frac{\boldsymbol{e}^T}{\sqrt{\boldsymbol{e}^T\boldsymbol{e}}},$$
which gives
$$\boldsymbol{f}_1 = \tilde{k}\left(\frac{l}{L} - 1\right)\frac{\boldsymbol{x}_2 - \boldsymbol{x}_1}{\|\boldsymbol{x}_2 - \boldsymbol{x}_1\|_2}, \qquad \boldsymbol{f}_2 = -\boldsymbol{f}_1.$$

Modeling complex elastic objects
Straightforward concept: sample the object with mass points, connect them with springs.
Representation: 2D triangle mesh (or 3D tetrahedral mesh, of course)
• Vertices $\mathbf{x}_i \in \mathbf{R}^2$
• Edges $E_{ij}$ are springs connecting vertices $\mathbf{x}_i$ and $\mathbf{x}_j$

"Mass-spring" systems in the wild

Spring Networks
Each spring between nodes $i$ and $j$ has rest length $L_{ij}$ and stiffness $k_{ij}$.
Energy of the spring network:
$$E = \sum_k E_k$$
Total spring force at a given node:
$$\boldsymbol{f}_i^{int} = -\frac{\partial E}{\partial \boldsymbol{x}_i} = -\sum_k \frac{\partial E_k}{\partial \boldsymbol{x}_i}$$
Total force at a given node:
$$\boldsymbol{f}_i = \boldsymbol{f}_i^{int} + \boldsymbol{f}_i^{ext}$$

Force equilibrium
Given applied forces $\boldsymbol{f}^{ext}$, how do we compute the resulting configuration $\boldsymbol{x}$?
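Before turning to equilibrium, the nD spring force worked out above can be sketched in a few lines of Python (an illustrative helper, not part of any particular codebase):

```python
import numpy as np

def spring_forces_nd(x1, x2, X1, X2, k):
    """Forces a Hookean spring exerts on its two endpoints.
    x1, x2: deformed positions; X1, X2: undeformed positions; k: material stiffness."""
    L = np.linalg.norm(X2 - X1)   # rest length from undeformed positions
    d = x2 - x1
    l = np.linalg.norm(d)         # current length
    eps = l / L - 1.0             # strain
    f1 = k * eps * d / l          # pulls x1 toward x2 when stretched (eps > 0)
    return f1, -f1                # f2 = -f1 (Newton's third law)
```

Summing these per-spring contributions over all edges incident to a node gives the total internal force at that node.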
For static equilibrium, the acceleration has to be zero for all nodes: $\boldsymbol{a}_i(\boldsymbol{x}) = \boldsymbol{0}\ \forall i$. From Newton's second law, we know that $\boldsymbol{f}_i(\boldsymbol{x}) = m_i\boldsymbol{a}_i(\boldsymbol{x}) = \boldsymbol{0}$, which gives the static equilibrium conditions
$$\boldsymbol{f}_i^{int}(\boldsymbol{x}) + \boldsymbol{f}_i^{ext} = \boldsymbol{0} \quad \forall i.$$

Equilibrium Conditions – Energy
If internal and external forces derive from potential fields,
$$\boldsymbol{f}_i^{int} = -\frac{\partial E^{int}}{\partial \boldsymbol{x}_i}, \qquad \boldsymbol{f}_i^{ext} = -\frac{\partial E^{ext}}{\partial \boldsymbol{x}_i},$$
then the static equilibrium conditions
$$\boldsymbol{f}_i^{int}(\boldsymbol{x}) + \boldsymbol{f}_i^{ext} = \boldsymbol{0} \quad \forall i$$
are equivalent to $\boldsymbol{x}$ being a stationary point of the total energy $E(\boldsymbol{x}) = E^{int}(\boldsymbol{x}) + E^{ext}(\boldsymbol{x})$, i.e.,
$$\frac{\partial E(\boldsymbol{x})}{\partial \boldsymbol{x}} = \boldsymbol{0}.$$

Problem Statement
Task: find a minimizer $\boldsymbol{x}^*$ of the function $E(\boldsymbol{x})$:
$$\boldsymbol{x}^* = \arg\min_{\boldsymbol{x}} E(\boldsymbol{x}) \quad\Rightarrow\quad E(\boldsymbol{x}^*) \le E(\boldsymbol{x})\ \forall \boldsymbol{x}.$$
In general:
• $E(\boldsymbol{x})$ is a nonlinear function
• $E(\boldsymbol{x})$ is multivariate, i.e., $\boldsymbol{x} \in \mathbf{R}^n$ with $n \ge 2$
• $E(\boldsymbol{x})$ may have local minima and maxima (numerical artifacts, or expected behavior of the physical system?)

Local vs. Global Minima
The global minimum is the absolute best among all possibilities; a local minimum is the best "among immediate neighbors". Somewhat philosophical question: does a local minimum "solve" the problem? It depends on the problem!

Solution Strategy
Given a point $\boldsymbol{x}_0$, how do we get to a minimum?
• "Walk" in a direction that decreases $E(\boldsymbol{x})$
• Which direction would you choose?
$$\boldsymbol{x}_{n+1} = \boldsymbol{x}_n - \alpha \nabla E(\boldsymbol{x}_n)$$

Directional Derivatives and the Gradient
In 1D: derivative == slope == rise over run. In higher dimensions: take a slice through the function along some direction, then apply the usual derivative concept. This is called the directional derivative; starting from the Taylor series, it is easy to see that, given a multivariate function $f(\boldsymbol{x})$, the gradient assigns a vector $\nabla f(\boldsymbol{x})$ to each point, and the inner product between the gradient and any unit vector gives the directional derivative "along that direction".
• Another way to think about it: the inner product outputs the component of the gradient along a unit vector. Out of all possible unit vectors, which is the one along which the function changes most drastically?

Gradient in coordinates
Most familiar definition: the list of partial derivatives.

The gradient is the direction of steepest ascent. The function value
• gets largest if we move in the direction of the gradient
• doesn't change if we move orthogonally to it
• decreases fastest if we move exactly in the opposite direction

Steepest Descent – Algorithm I
Algorithm: steepest_descent
Input: x (initial guess), α (step length), ε (threshold)
while abs(∇E(x)) > ε do
    x = x − α ∇E(x);
end do;

Gradient Descent (1D)
• Basic idea: follow the gradient "downhill" until it is zero
• (Zero gradient is the 1st-order optimality condition)

Steepest Descent – Observations
• Progress is initially good, but slows down when approaching the minimum
• The number of steps to convergence depends on the step length α
• Too small an α leads to slow progress
• Too large an α leads to divergence
How can we improve this basic steepest descent algorithm?
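The basic fixed-step loop above, transcribed as a Python sketch for the 1D case (names and defaults are illustrative):

```python
def steepest_descent(grad_E, x, alpha=0.1, eps=1e-8, max_iters=10000):
    """Fixed-step steepest descent: repeat x <- x - alpha * grad_E(x)
    until the gradient magnitude falls below the threshold eps."""
    for _ in range(max_iters):
        g = grad_E(x)
        if abs(g) < eps:
            break
        x -= alpha * g
    return x
```

For $E(x) = (x - 3)^2$ this converges to the minimizer $x = 3$; making alpha too large (here, above 1.0) makes the iterates diverge, exactly as observed above.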
Steepest Descent – Improvements
• Idea: enforce monotonicity, i.e., $E(x_{n+1}) < E(x_n)\ \forall n$.
• Goal: in each step, find $\alpha$ such that (I) $E(x_n - \alpha\nabla E(x_n)) < E(x_n)$.
• From the Taylor series, we know that there exists $\alpha > 0$ such that (I) holds:
$$f(x + dx) = \sum_{n=0}^{\infty} \frac{f^{(n)}(x)}{n!}\,dx^n = f(x) + f'(x)\,dx + O(dx^2) \approx f(x) + f'(x)\,dx \quad \text{if } dx \text{ is sufficiently small.}$$
• Approach: reduce $\alpha$ until condition (I) is satisfied.

Line Search
Algorithm: line_search
Input: x (current state), dx (search direction), α (initial step length), β (scaling factor, 0 < β < 1)
while E(x − α·dx) > E(x) do
    α = α·β;
end do;

Algorithm: steepest_descent
Input: x, α, β, ε
while abs(∇E(x)) > ε do
    dx = ∇E(x);
    α = line_search(x, dx, α, β);
    x = x − α·dx;
end do;

Gradient Descent (nD)
Basic challenge in nD:
• the solution can "oscillate"
• it takes many, many small steps
• it is very slow to converge
How can we improve on this?

Newton's Method
Alternative approach:
• Find $dx$ such that $\nabla E(x_n + dx) = 0$
• Taylor series on the gradient, to first order:
$$\nabla E(x_n + dx) = \nabla E(x_n) + \nabla^2 E(x_n)\,dx = 0$$
$$dx = -\nabla^2 E(x_n)^{-1}\,\nabla E(x_n)$$

The Hessian visualized
The gradient gives a linear approximation; the Hessian gives a quadratic approximation.

Newton's Method – Algorithm
Algorithm: newton
Input: x, α, β, ε
while abs(∇E(x)) > ε do
    dx = −∇²E(x)⁻¹ ∇E(x);
    α = line_search(x, dx, α, β);
    x = x + α·dx;
end do;
The structure is identical to steepest_descent; only the search direction changes.

Observations & Interpretations
• Newton's method converges much faster than steepest descent.
• Interpretation: Newton's method performs a coordinate transformation that makes the energy landscape look like a "round bowl".

Spring Networks – Hessian
• Given $\boldsymbol{x}$, we can compute $E^{int}$, $E^{ext}$ and $\partial E/\partial \boldsymbol{x}$ → sufficient for gradient descent.
• For Newton's method, we also need second derivatives, i.e., the Hessian $\mathbf{H} = \frac{\partial^2 E}{\partial \boldsymbol{x}^2}$.
• Hessian blocks for a given spring $k$:
$$\mathbf{H}_{ij}^k = \frac{\partial^2 E_k}{\partial \boldsymbol{x}_i \partial \boldsymbol{x}_j}$$

Forces & Force Jacobians
• So, what does $\mathbf{H} = \frac{\partial^2 E}{\partial \boldsymbol{x}^2} = -\frac{\partial \boldsymbol{F}}{\partial \boldsymbol{x}}$ represent?
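Stepping back for a moment, Newton's method combined with the backtracking line search above can be sketched as follows (illustrative Python; the alpha_min safeguard is an addition, not part of the slides):

```python
import numpy as np

def line_search(E, x, dx, alpha=1.0, beta=0.5, alpha_min=1e-12):
    """Backtracking line search: shrink alpha until the step decreases E."""
    while E(x + alpha * dx) >= E(x) and alpha > alpha_min:
        alpha *= beta
    return alpha

def newton(E, grad_E, hess_E, x, eps=1e-8, max_iters=100):
    """Newton's method: solve H dx = -grad for the search direction,
    then line-search along dx."""
    for _ in range(max_iters):
        g = grad_E(x)
        if np.linalg.norm(g) < eps:
            break
        dx = np.linalg.solve(hess_E(x), -g)  # avoids forming the inverse explicitly
        x = x + line_search(E, x, dx) * dx
    return x
```

On a quadratic energy, a single full Newton step lands exactly on the minimizer; solving the linear system rather than inverting the Hessian is the standard design choice for both accuracy and cost.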
• It is much more intuitive to think in terms of blocks of the Jacobian, $\frac{\partial \boldsymbol{F}_i}{\partial \boldsymbol{x}_j}$: "how does the force on particle $i$ change when the position of particle $j$ changes?"

Forces & Force Jacobians (cont.)
How do we compute the force Jacobian?
• Analytic formulas
• Numerical approach: finite differences — very useful for prototyping/debugging
• Automatic & symbolic differentiation (e.g., Maple)

Gradients of Matrix-Valued Expressions
It is EXTREMELY useful to be able to differentiate matrix-valued expressions! At least once in your life, work these out meticulously in coordinates!

Checking Derivatives
How do you know if your derivatives are correct?
• Compare analytical derivatives against numerical derivatives computed with finite differences.
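Such a finite-difference check might look like this (a minimal sketch; the step size and tolerance defaults are assumptions that may need tuning for a given energy):

```python
import numpy as np

def check_gradient(E, grad_E, x, h=1e-6, tol=1e-4):
    """Compare an analytic gradient against central finite differences
    of the scalar energy E, component by component."""
    g_fd = np.zeros_like(x)
    for i in range(len(x)):
        dx = np.zeros_like(x)
        dx[i] = h
        g_fd[i] = (E(x + dx) - E(x - dx)) / (2.0 * h)
    return float(np.max(np.abs(grad_E(x) - g_fd))) < tol
```

The same idea applied to each component of the force yields a check for the force Jacobian (Hessian) blocks.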
