Lecture 6: Optimal Control Systems
Ref. 2: Chapters 1 & 2
Dr. Lamiaa M. Elshenawy
Email: [email protected] [email protected]
Website: http://mu.menofia.edu.eg/lmyaa_alshnawy/StaffDetails/1/ar

OUTLINES
Historical Tour
Modern Control Theory vs. Conventional Control Theory
Optimal Control System
HISTORICAL TOUR
1. Johann Bernoulli in 1696 posed the problem of "finding the path of quickest descent between two points not in the same horizontal or vertical line" (the brachistochrone problem).
2. Leonhard Euler solved the problem in 1750 and founded the calculus of variations, giving the subject its name in 1766.
3. Joseph-Louis Lagrange in 1755 found the necessary condition, the "Euler-Lagrange equation".
4. Adrien-Marie Legendre in 1786 found a sufficient condition.
5. Carl Gustav Jacob Jacobi in 1836 gave a more rigorous analysis of the sufficient conditions, the "Legendre-Jacobi condition".
Johann Bernoulli (1667-1748): a Swiss mathematician
Leonhard Euler (1707-1783): a Swiss mathematician, physicist, astronomer, and engineer
Adrien-Marie Legendre (1752-1833): a French mathematician
Carl Gustav Jacob Jacobi (1804-1851): a German mathematician
6. William Rowan Hamilton in 1838 did remarkable work on mechanics and the motion of a particle in space, leading to the "Hamilton-Jacobi equation".
7. Norbert Wiener developed optimal control for weapon fire during World War II (1940-1945).
8. L. S. Pontryagin in 1956 presented the "maximum principle", based on the Hamiltonian function.
9. R. Bellman in 1957 introduced "dynamic programming" to solve discrete-time optimal control problems, the "Hamilton-Jacobi-Bellman approach".
10. R. E. Kalman in 1960 provided the linear quadratic regulator (LQR) and linear quadratic Gaussian (LQG) theory for designing optimal feedback controls, building on the work of Pontryagin.

William Rowan Hamilton (1805-1865): an Irish mathematician
Norbert Wiener (1894-1964): an American mathematician and philosopher
Lev Semyonovich Pontryagin (1908-1988): a Soviet mathematician
Richard Ernest Bellman (1920-1984): an American applied mathematician
Rudolf Emil Kalman (1930-2016): a Hungarian-born American electrical engineer, mathematician, and inventor

MODERN CONT. VS CONVENTIONAL CONT.
Conventional Control                  | Modern Control
Frequency-domain approach             | Time-domain approach
Based on Laplace transforms           | Based on state-variable representation
Applicable only to SISO, LTI systems  | Applicable to SISO/MIMO, linear/nonlinear, time-invariant/time-varying systems
Initial conditions assumed zero       | Initial conditions need not be zero
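The time-domain, state-variable approach handles nonzero initial conditions directly. As a minimal illustrative sketch (my example, not from the lecture), the following simulates a state-feedback law u = r − Kx on a double-integrator plant by forward-Euler integration; the plant matrices and the gain K are hypothetical choices:

```python
# Forward-Euler simulation of state feedback u = r - K x
# Plant: double integrator  x1' = x2, x2' = u  (A, B below)
A = [[0.0, 1.0],
     [0.0, 0.0]]
B = [0.0, 1.0]
K = [2.0, 3.0]           # hypothetical stabilizing gain (illustrative choice)
r = 0.0                  # zero reference: regulate the state to the origin

x = [1.0, 0.0]           # nonzero initial condition
dt, steps = 0.001, 10000 # simulate 10 seconds
for _ in range(steps):
    u = r - (K[0]*x[0] + K[1]*x[1])              # state-feedback control law
    dx1 = A[0][0]*x[0] + A[0][1]*x[1] + B[0]*u
    dx2 = A[1][0]*x[0] + A[1][1]*x[1] + B[1]*u
    x = [x[0] + dt*dx1, x[1] + dt*dx2]

print(abs(x[0]), abs(x[1]))   # both should be near zero after 10 s
```

With this K the closed-loop matrix A − BK has eigenvalues −1 and −2, so the state decays to the origin.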
[Block diagram: classical feedback loop — reference r, error e, controller G_c(s), plant G(s), output y, with a sensor in the feedback path]

Y(s) = [G(s)G_c(s) / (1 + G(s)G_c(s))] R(s)

[Block diagram: modern state-feedback loop — reference r, control u, plant ẋ = Ax + Bu, y = Cx + Du, with the state x fed back through gain K]

IMPORTANT IDIOMS
Calculus of Variations (CoV): the branch of mathematics concerned with finding maxima and minima of functionals.

Functionals: functions of functions. Let J be a functional dependent on a function f(x), J = V(f(x)); functionals V are often expressed as definite integrals.

Quadratic Form: a special nonlinear function having only second-order terms (either the square of a variable or the product of two variables):

  f(x) = Σ_{i=1}^{n} Σ_{j=1}^{n} p_ij x_i x_j = xᵀPx,   P = [p_ij] an n×n matrix
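A quadratic form can be evaluated from either expression above. A minimal sketch (illustrative values, not from the lecture) computes the double sum and compares it with the matrix product xᵀ(Px):

```python
# Evaluate the quadratic form f(x) = x^T P x via the double-sum definition
def quadratic_form(P, x):
    n = len(x)
    return sum(P[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

P = [[2.0, 1.0],
     [1.0, 3.0]]        # symmetric 2x2 matrix (illustrative values)
x = [1.0, -2.0]

f = quadratic_form(P, x)

# Same value via the matrix product: first P x, then dot with x
Px = [P[0][0]*x[0] + P[0][1]*x[1],
      P[1][0]*x[0] + P[1][1]*x[1]]
g = x[0]*Px[0] + x[1]*Px[1]

# By hand: 2(1) + 1(1)(-2) + 1(-2)(1) + 3(4) = 2 - 2 - 2 + 12 = 10
print(f, g)   # 10.0 10.0
```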
OPTIMAL CONTROL SYSTEM

Optimal control problems fall into two classes:

Static optimization: the plant operates under steady-state conditions; it is described by algebraic equations, and the problem is solved by calculus, Lagrange multipliers, and linear/nonlinear programming.

Dynamic optimization: the plant operates under dynamic conditions; it is described by differential/difference equations, and the problem is solved by the calculus of variations, the Pontryagin principle, dynamic programming, and search techniques.
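The static case can be illustrated with a toy Lagrange-multiplier problem (my example, not from the lecture): minimize f(x, y) = x² + y² subject to x + y = 1. Setting the partial derivatives of the Lagrangian L = x² + y² + λ(x + y − 1) to zero gives 2x + λ = 0, 2y + λ = 0, and x + y = 1, hence x = y = 1/2 with λ = −1:

```python
# Static optimization by Lagrange multipliers:
# minimize f(x, y) = x^2 + y^2  subject to  g(x, y) = x + y - 1 = 0.
# Stationarity of L = x^2 + y^2 + lam*(x + y - 1) gives x = y = 0.5, lam = -1.
x_star, y_star = 0.5, 0.5
f_star = x_star**2 + y_star**2            # optimal cost = 0.5

# Sanity check: every other feasible point (t, 1 - t) costs at least as much.
worse = all(t*t + (1 - t)**2 >= f_star for t in [i/100 for i in range(-200, 300)])
print(f_star, worse)   # 0.5 True
```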
Performance Index

Classical control design: specifications in terms of the time response (rise time, settling time, peak overshoot, steady-state accuracy) and the frequency response (gain/phase margin, bandwidth).

Modern control design: find the optimal control u* that drives the system to a target (or along a trajectory) while extremizing a performance index.
An optimal control problem is specified by three ingredients:

Plant:
  ẋ = Ax + Bu
  y = Cx + Du

Performance Index:
  J = ∫_{t₀}^{t_f} [xᵀQx + uᵀRu] dt

Constraints:
  U⁻ ≤ u(t) ≤ U⁺
  x⁻ ≤ x(t) ≤ x⁺

where Q is a positive semidefinite matrix and R is a positive definite matrix.

THEOREM 8 (Euler-Lagrange Multiplier Theorem)
Minimize the cost function
  J = ∫_{t₀}^{t_f} [xᵀQx + uᵀRu] dt
subject to ẋ = Ax + Bu. Define
  V(x, ẋ, u, t) = xᵀQx + uᵀRu,
  h(x, ẋ, u, t) = ẋ − Ax − Bu = 0,
  L(x, ẋ, u, t) = V(x, ẋ, u, t) + λᵀh(x, ẋ, u, t).
If (x*, u*) is a local minimum, then (necessary conditions, with x(0) given):
  ∂L/∂x* − (d/dt)(∂L/∂ẋ*) = 0
  ∂L/∂u* − (d/dt)(∂L/∂u̇*) = 0
  ∂L/∂λ* − (d/dt)(∂L/∂λ̇*) = 0
Example 14: Minimize
  J = ∫₀¹ [x²(t) + u²(t)] dt
subject to the plant equation
  ẋ(t) = −x(t) + u(t)
with boundary conditions x(0) = 1, x(1) = 0.

Solution:
  L = x²(t) + u²(t) + λ(ẋ(t) + x(t) − u(t))
  ∂L/∂x − (d/dt)(∂L/∂ẋ) = 2x(t) + λ − λ̇ = 0        (1)
  ∂L/∂u − (d/dt)(∂L/∂u̇) = 2u(t) − λ = 0             (2)
  ∂L/∂λ − (d/dt)(∂L/∂λ̇) = ẋ(t) + x(t) − u(t) = 0   (3)
From (2) and (3): λ*(t) = 2u(t) = 2ẋ(t) + 2x(t).
Substituting into (1): ẍ(t) − 2x(t) = 0.
The candidate optimum design is
  x*(t) = C₁e^(−√2 t) + C₂e^(√2 t)
  u*(t) = C₁(1 − √2)e^(−√2 t) + C₂(1 + √2)e^(√2 t)
with the boundary conditions giving
  C₁ = 1/(1 − e^(−2√2)),  C₂ = 1/(1 − e^(2√2))
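The closed-form candidate can be checked numerically. The sketch below (illustrative, not part of the lecture) evaluates C₁ and C₂, confirms the boundary conditions x(0) = 1 and x(1) = 0, and verifies ẍ = 2x and u* = ẋ* + x* by finite differences:

```python
import math

s2 = math.sqrt(2.0)
C1 = 1.0 / (1.0 - math.exp(-2.0 * s2))
C2 = 1.0 / (1.0 - math.exp(2.0 * s2))

def x_star(t):
    """Candidate optimal state x*(t) = C1 e^{-sqrt(2) t} + C2 e^{sqrt(2) t}."""
    return C1 * math.exp(-s2 * t) + C2 * math.exp(s2 * t)

def u_star(t):
    """Candidate optimal control u*(t) = x'(t) + x(t)."""
    return C1 * (1 - s2) * math.exp(-s2 * t) + C2 * (1 + s2) * math.exp(s2 * t)

# Boundary conditions
print(x_star(0.0))   # ~1.0
print(x_star(1.0))   # ~0.0

# Euler-Lagrange ODE x'' - 2x = 0, checked by a central finite difference
h, t = 1e-4, 0.5
x_dd = (x_star(t + h) - 2 * x_star(t) + x_star(t - h)) / h**2
print(abs(x_dd - 2 * x_star(t)))            # ~0

# Control consistency u* = x*' + x*, checked the same way
x_d = (x_star(t + h) - x_star(t - h)) / (2 * h)
print(abs(u_star(t) - (x_d + x_star(t))))   # ~0
```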
THEOREM 9 (Pontryagin Maximum Principle)
Minimize the cost function
  J = ∫_{t₀}^{t_f} [xᵀQx + uᵀRu] dt
subject to f = ẋ = Ax + Bu. Define
  V(x, u, t) = xᵀQx + uᵀRu,
  H(x, u, t) = V(x, u, t) + λᵀf   (the Hamiltonian).
If (x*, u*) is a local minimum, then (necessary conditions, with x(0) given):
  ∂H/∂x_i* = −λ̇_i,  ∂H/∂λ_i* = ẋ_i,  ∂H/∂u* = 0
Example 15: Minimize, using the maximum principle,
  J = (1/2) ∫₀² u²(t) dt
subject to the plant equations
  ẋ₁(t) = x₂(t),  ẋ₂(t) = u(t)
with boundary conditions x(0) = [1 2]ᵀ, x(2) = [1 0]ᵀ.

Solution:
  V = (1/2)u²(t),  f₁ = x₂(t),  f₂ = u(t)
  H = (1/2)u²(t) + λ₁f₁ + λ₂f₂ = (1/2)u²(t) + λ₁x₂(t) + λ₂u(t)   (1)
  ∂H/∂u = u(t) + λ₂(t) = 0  ⇒  u(t) = −λ₂(t)                      (2)
Substituting (2) into (1):
  H = (1/2)λ₂²(t) + λ₁x₂(t) − λ₂²(t)
The canonical equations are
  ẋ₁(t) = ∂H/∂λ₁ = x₂(t)       (3)
  ẋ₂(t) = ∂H/∂λ₂ = −λ₂(t)      (4)
  λ̇₁(t) = −∂H/∂x₁ = 0          (5)
  λ̇₂(t) = −∂H/∂x₂ = −λ₁(t)     (6)

From (5) and (6):
  λ₁(t) = C₁,  λ₂(t) = −C₁t + C₂
Integrating (4) and then (3):
  x₂(t) = (C₁/2)t² − C₂t + C₃
  x₁(t) = (C₁/6)t³ − (C₂/2)t² + C₃t + C₄
From the boundary conditions: C₄ = 1, C₃ = 2, C₂ = 4, and C₁ = 3.
The candidate optimum design is
  x₁*(t) = (1/2)t³ − 2t² + 2t + 1
  x₂*(t) = (3/2)t² − 4t + 2
  λ₁*(t) = 3,  λ₂*(t) = −3t + 4
  u*(t) = 3t − 4
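Again the candidate can be checked numerically (an illustrative sketch, not part of the lecture): the polynomials satisfy both boundary conditions, and the optimal cost J* = (1/2)∫₀² (3t − 4)² dt = 4 analytically, which a trapezoidal sum reproduces:

```python
def x1(t): return 0.5*t**3 - 2.0*t**2 + 2.0*t + 1.0   # x1*(t)
def x2(t): return 1.5*t**2 - 4.0*t + 2.0              # x2*(t) = x1'(t)
def u(t):  return 3.0*t - 4.0                         # u*(t) = x2'(t)

# Boundary conditions x(0) = [1, 2]^T and x(2) = [1, 0]^T
print(x1(0.0), x2(0.0))   # 1.0 2.0
print(x1(2.0), x2(2.0))   # 1.0 0.0

# Optimal cost J = (1/2) * integral_0^2 u^2 dt, by the trapezoidal rule
n = 100000
h = 2.0 / n
J = 0.5 * h * (0.5*u(0.0)**2 + sum(u(i*h)**2 for i in range(1, n)) + 0.5*u(2.0)**2)
print(J)   # ~4.0
```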
[Figure: plots of x₁*(t), x₂*(t), and u*(t) for 0 ≤ t ≤ 2]

Thank you for your attention