Robust and Reconfigurable Flight Control by Neural Networks
Silvia Ferrari and Mark Jensenius
Department of Mechanical Engineering, Duke University
Infotech@Aerospace, Crystal City, VA, September 28, 2005
A Multiphase Learning Approach for Automated Reasoning
• On-line: control, identification, planning (routing, scheduling, ...)
• Supervised learning and reinforcement learning: the same performance metric is optimized during both phases.
Introduction
• Stringent operational requirements introduce complexity, nonlinearity, and uncertainty
• Classical/neural synthesis of control systems: a-priori control knowledge plus adaptive neural networks
• Dual heuristic programming adaptive critic architecture: the action network takes the immediate control action; the critic network evaluates the action network's performance
Motivation
• Sigmoidal neural networks for control: coping with complexity
• Applicability to nonlinear systems
• Applicability to multivariable systems
• Batch and incremental training
• Closed-loop stability and robustness by IQCs
• Constrained training for robust adaptation on line
Design Approach
Modeling → Linearizations → Linear Control → Initialization → On-line Training → Full-Envelope Control
Nonlinear Dynamical System
[Figure: full-scale aircraft with body axes XB, YB, ZB, airspeed V, angles α and β, body rates p, q, r, and the forces Lift, Drag, Thrust, and weight mg]
Full-scale aircraft simulation, described by a state vector, a control vector, a vector of parameters, and an output vector.
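The vector definitions themselves are not recoverable from this transcript. The LaTeX block below is a plausible reconstruction, inferred from the command and control-surface labels used later in the deck (T, S, A, R and the commanded velocity, climb, roll, and sideslip angles); treat the symbols as assumptions rather than the authors' exact notation.

    % Assumed reconstruction of the simulation vectors (not verbatim from the slides)
    \dot{\mathbf{x}} = \mathbf{f}[\mathbf{x}(t), \mathbf{u}(t), \mathbf{a}(t)], \qquad
    \mathbf{y}(t) = \mathbf{h}[\mathbf{x}(t)]
    % State: airspeed, path/attitude angles, and body rates
    \mathbf{x} = [\,V \;\; \gamma \;\; q \;\; \theta \;\; r \;\; \beta \;\; p \;\; \mu\,]^{T}
    % Controls: throttle, stabilator, aileron, rudder
    \mathbf{u} = [\,\delta T \;\; \delta S \;\; \delta A \;\; \delta R\,]^{T}
    % Commanded outputs: velocity, climb angle, roll (bank) angle, sideslip angle
    \mathbf{y}_c = [\,V \;\; \gamma \;\; \mu \;\; \beta\,]^{T}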
Classical Control Design
Linearizations at design points (k) over the flight envelope, with γ = μ = β = 0.
[Figure: flight envelope and design points in the altitude (m) vs. velocity (m/s) plane]
Classical linear designs:
• Multivariable control (PI)
• Multi-objective synthesis (LMI)
One-Hidden-Layer Sigmoidal Neural Network
[Figure: network with inputs p_1, ..., p_q, input weights w_ij (matrix W), input bias d, hidden sigmoidal nodes with input-to-node variables n_1, ..., n_s, output weights v_1, ..., v_s, and output bias b]
Input: p; output: z = NN(p); adjustable parameters: W, d, v
Output equation: z = v^T σ[W p + d], where σ(·) denotes the s hidden sigmoidal nodes
Gradient equations: ∂z/∂p_j = Σ_{i=1}^{s} v_i σ'(n_i) w_ij,  j = 1, ..., q
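As a concrete illustration of these output and gradient equations, here is a minimal NumPy sketch in Python; the function names and the use of tanh as the sigmoid are assumptions for illustration, not code from the paper.

    import numpy as np

    def nn_output(p, W, d, v):
        """One-hidden-layer sigmoidal network: z = v^T sigma(W p + d)."""
        n = W @ p + d              # input-to-node variables n_i
        return v @ np.tanh(n)      # scalar output z

    def nn_gradient(p, W, d, v):
        """Gradient dz/dp_j = sum_i v_i * sigma'(n_i) * w_ij."""
        n = W @ p + d
        sigma_prime = 1.0 - np.tanh(n) ** 2   # derivative of tanh
        return (v * sigma_prime) @ W          # row of q partial derivatives

    # Example: q = 3 inputs, s = 4 hidden nodes
    rng = np.random.default_rng(0)
    W, d, v = rng.normal(size=(4, 3)), rng.normal(size=4), rng.normal(size=4)
    p = np.array([0.1, -0.2, 0.3])
    print(nn_output(p, W, d, v), nn_gradient(p, W, d, v))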
General Algebraic Training Approach
Training set: known inputs x_k, outputs u_k, and gradients c_k of the neural network, k = 1, 2, ..., p
Requirement: the network must match this output and gradient information exactly.
Output initialization equations: u_k = v^T σ[W x_k + d], written collectively as u = S v
Gradient initialization equations: c_k^T = B_k W, with B_k ≡ {v ⊙ σ'[W x_k + d]}^T; equivalently c_k = W^T {v ⊙ σ'[W x_k + d]}, where ⊙ denotes element-wise multiplication
Gradient-Based Algebraic Training
Assume each input-to-node variable n_ik is a known constant. Then n is known, and the initialization equations can be written as linear algebraic equations, to be solved in turn for w_a, for v, and for w_x, where:
• n: vector of all input-to-node constants n_ik (obtained with the Vec operator)
• c: vector of feedback gains
• b: output bias vector
Initialization Matrices (k = 1, 2, ..., p indexes the training points)
• A: (p² × 3s) sparse matrix of scheduling variables
• S: (p × s) matrix of sigmoidal functions of n
• X: (np × ns) sparse matrix evaluated from v and n
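To make the linear-algebraic character of the initialization concrete, the following NumPy sketch in Python solves only the output equations u = S v for the output weights, under the stated assumption that the input-to-node values are known constants; the random choice of those constants and all function names are illustrative assumptions, not the authors' algorithm.

    import numpy as np

    def algebraic_output_init(x_train, u_train, s_nodes, rng=np.random.default_rng(1)):
        """Solve the linear output equations u = S v for the output weights v,
        assuming the input-to-node variables n_ik are fixed, known constants."""
        p, q = x_train.shape
        # Assumed: pick input weights/bias so that the n_ik are known constants
        W = rng.normal(size=(s_nodes, q))
        d = rng.normal(size=s_nodes)
        N = x_train @ W.T + d                 # p x s matrix of input-to-node values
        S = np.tanh(N)                        # p x s matrix of sigmoids
        v, *_ = np.linalg.lstsq(S, u_train, rcond=None)   # linear solve for v
        return W, d, v

    # p = 5 training pairs, q = 2 inputs, s = 5 hidden nodes (exact fit when s >= p)
    x = np.linspace(-1, 1, 10).reshape(5, 2)
    u = np.sin(x).sum(axis=1)
    W, d, v = algebraic_output_init(x, u, s_nodes=5)
    print(np.abs(np.tanh(x @ W.T + d) @ v - u).max())   # residual of the output equations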
Comparison of Initialized PI Neural Network and Linear Controllers
[Figure: aircraft response to a climb-angle command input at the interpolating condition (H_0, V_0) = (2 km, 95 m/s); panels show velocity (m/s) and climb angle (deg) vs. time (sec) for small- and large-angle maneuvers, comparing linear control with initialized neural network control]
Stability Analysis via Integral Quadratic Constraints (IQCs)
[Figure: standard feedback interconnection, with signals v and w, between a transfer matrix G(s), i.e., an LTI system, and a causal bounded operator Δ]
IQC Stability Theorem: if the interconnection is well posed, Δ satisfies the IQC defined by a multiplier Π, and there exists ε > 0 such that
  [G(jω); I]* Π(jω) [G(jω); I] ≤ −εI for all ω,
then the interconnection is stable.
This condition is equivalent to an LMI feasibility problem with positive, real parameters p_i and a symmetric matrix P.
Closed-Loop Stability of the Neural Network Controller
Closed-loop system comprised of the NN controller and the LTI model, with B_N = B V and C_N = W C_a: a Lur'e-type system.
Applying the IQC Stability Theorem: Δ is a bounded, causal, diagonal operator with repeated nonlinearities that are monotonically non-decreasing, slope-restricted, and belonging to the sector [0, 0.5].
Thus, stability of the NN-controlled system is guaranteed if there exist constant symmetric matrices M and P that satisfy the corresponding LMIs, for i = 1, ..., s.
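The LMIs themselves are not recoverable from the transcript, so the Python sketch below only illustrates the kind of LMI feasibility test involved, using a generic Lur'e/sector-bounded absolute-stability condition in CVXPY; the system matrices, the sector bound of 0.5, and the specific LMI are stand-in assumptions, not the authors' conditions. An SDP-capable solver (such as the SCS solver bundled with CVXPY) handles the semidefinite constraints.

    import cvxpy as cp
    import numpy as np

    # Stand-in Lur'e system: dx/dt = A x + B w, v = C x, w = phi(v), phi in sector [0, k]
    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    k = 0.5                                    # sector bound assumed from the slide

    P = cp.Variable((2, 2), symmetric=True)    # Lyapunov matrix
    tau = cp.Variable(nonneg=True)             # sector (S-procedure) multiplier

    # Circle-criterion style LMI blocks (one common form of such feasibility tests)
    M11 = A.T @ P + P @ A
    M12 = P @ B + tau * (k * C.T)
    M22 = -2 * tau * np.eye(1)
    lmi = cp.bmat([[M11, M12], [M12.T, M22]])

    prob = cp.Problem(cp.Minimize(0),
                      [P >> 1e-6 * np.eye(2), lmi << -1e-6 * np.eye(3)])
    prob.solve()
    print("LMI feasible:", prob.status == cp.OPTIMAL)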
Adaptive Critic On-line Adaptation
[Block diagram: command input y_s(t), command-state generator (CSG) producing x_c, scheduling-variable generator (SVG) producing a, critic network NNC supplying λ = ∂V/∂x_a, action network NNA, network NNF, command output y_c, control components u_c and u(t), error e, and state feedback x(t)]
Dynamic Programming Approach
[Figure: cost J* and cost-to-go V*, each including a terminal cost, shown over time from t_0 through t to t_f]
By the Principle of Optimality, V*_abc = V_ab + V*_bc for points a, b, c along the trajectory, so the minimization of J can be imbedded in the minimization of the cost-to-go V(t).
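The cost expressions on this slide are not recoverable; the LaTeX block below is a hedged reconstruction using the standard finite-horizon form, consistent with the "terminal cost" labels above, and the symbols φ and L are assumptions.

    % Assumed standard finite-horizon cost and cost-to-go (reconstruction, not verbatim)
    J = \varphi[\mathbf{x}(t_f)] + \int_{t_0}^{t_f} L[\mathbf{x}(\tau), \mathbf{u}(\tau)]\, d\tau
    V(t) = \varphi[\mathbf{x}(t_f)] + \int_{t}^{t_f} L[\mathbf{x}(\tau), \mathbf{u}(\tau)]\, d\tau
    % so minimizing J from t_0 is imbedded in minimizing the cost-to-go V(t) for every t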
Dual Heuristic Programming
Recurrence relation [Howard, 1960] for the derivatives of the cost-to-go
Action network criterion (optimality condition): provides the NNA target at time t
Critic network criterion: provides the NNC target at time t
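The relations themselves did not survive in this transcript; the following LaTeX block is a hedged reconstruction of the standard discrete-time DHP relations, consistent with the critic output λ = ∂V/∂x_a shown earlier. U denotes the utility (instantaneous cost) term, and the notation is assumed.

    % Critic recurrence (NN_C target at time t):
    \lambda^{T}(t) = \frac{\partial U(t)}{\partial \mathbf{x}(t)}
      + \frac{\partial U(t)}{\partial \mathbf{u}(t)} \frac{\partial \mathbf{u}(t)}{\partial \mathbf{x}(t)}
      + \lambda^{T}(t+1) \left[ \frac{\partial \mathbf{x}(t+1)}{\partial \mathbf{x}(t)}
      + \frac{\partial \mathbf{x}(t+1)}{\partial \mathbf{u}(t)} \frac{\partial \mathbf{u}(t)}{\partial \mathbf{x}(t)} \right]
    % Action optimality condition (NN_A target at time t):
    \frac{\partial U(t)}{\partial \mathbf{u}(t)} + \lambda^{T}(t+1)\, \frac{\partial \mathbf{x}(t+1)}{\partial \mathbf{u}(t)} = \mathbf{0}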
Action/Critic Network On-line Learning, at Time t
[Diagram: NN target generation produces the network error e; starting from w(t) = w_0, RProp iterates w_{l+1} = w_l + Δw_l until the adapted weights w(t + 1) are obtained]
The (action/critic) network must meet its target:
• E: network performance
• e: network error
• w: network weights
A modified resilient backpropagation (RProp) algorithm minimizes E with respect to w.
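For reference, here is a minimal NumPy sketch in Python of a basic RProp-style weight update (not the authors' modified version), showing the sign-based step-size adaptation; the step-size constants and the quadratic demo error are illustrative assumptions.

    import numpy as np

    def rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
                   step_min=1e-6, step_max=1.0):
        """One RProp iteration: adapt per-weight step sizes from gradient sign
        changes, then move each weight opposite the sign of its gradient."""
        sign_change = grad * prev_grad
        step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
        step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
        grad = np.where(sign_change < 0, 0.0, grad)   # skip the update after a sign flip
        w_new = w - np.sign(grad) * step
        return w_new, grad, step

    # Demo: minimize E(w) = ||w - target||^2 (stand-in for the network error criterion)
    target = np.array([1.0, -2.0, 0.5])
    w, prev_grad, step = np.zeros(3), np.zeros(3), np.full(3, 0.1)
    for _ in range(100):
        grad = 2.0 * (w - target)             # dE/dw for the quadratic demo error
        w, prev_grad, step = rprop_step(w, grad, prev_grad, step)
    print(w)   # approaches the target weights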
Adaptive vs. Fixed NN Controllers During a Coupled Maneuver
[Figure: aircraft response at (H_0, V_0) = (2 km, 95 m/s); panels show velocity (m/s), climb angle (deg), roll angle (deg), and sideslip angle (deg) vs. time (sec), comparing the command input, fixed neural control, and adaptive critic neural control]
Adaptive vs. Fixed NN Controllers During a Large-Angle Maneuver
[Figure: aircraft response at (H_0, V_0) = (7 km, 160 m/s); panels show velocity (m/s), climb angle (deg), roll angle (deg), and sideslip angle (deg) vs. time (sec), comparing the command input, fixed neural control, and adaptive critic neural control]
Adaptive vs. Fixed NN Controllers During a Large-Angle Maneuver
[Figure: control history at (H_0, V_0) = (7 km, 160 m/s); panels show T (%), S (deg), A (deg), and R (deg) vs. time (sec) for fixed and adaptive critic neural control, together with the trajectory in altitude (m) vs. East (m) and North (m)]
Fixed Neural Controller Performance in the Presence of Control Failures
Control failures:
• T = 0, 0 ≤ t ≤ 10 sec
• S = 0, 5 ≤ t ≤ 10 sec
• R = −34°, t ≤ 5 sec; R = 0, 5 ≤ t ≤ 10 sec
[Figure: aircraft response and control history at (H_0, V_0) = (3 km, 100 m/s) under fixed neural control vs. the command input; panels show V (m/s) and attitude angles (deg), plus T (%), S (deg), A (deg), and R (deg), all vs. time (sec)]
Adaptive vs. Fixed NN Controllers in the Presence of Control Failures
Control failures (10 ≤ t ≤ 15 sec): T_max = 50%, R = −15°
[Figure: control history at (H_0, V_0) = (7 km, 160 m/s); panels show T (%), S (deg), A (deg), and R (deg) vs. time (sec) for fixed and adaptive critic neural control]
Adaptive vs. Fixed NN Controllers in the Presence of Control Failures
[Figure: aircraft response after t = 10 sec at (H_0, V_0) = (3 km, 100 m/s); panels show velocity (m/s), climb angle (deg), roll angle (deg), sideslip angle (deg), angle of attack (deg), and yaw angle (deg) vs. time (sec), comparing the command input, fixed neural control, and adaptive critic neural control]
Robust Adaptation: Constrained Algebraic Training
[Diagram: network structure with weight blocks V, W_R, W_A, M_1, and M_2, output bias b, state/control inputs x_a (or u), and scheduling inputs a_1, a_2, and a]
Neural Network Weights Partitioning
[Diagram: the weights b, A, and W_A are partitioned into constrained and unconstrained weights related through construction functions; the unconstrained weights may be initialized as zero, randomized, at design points, or by hyperspherical initialization]
Controller Performance at Interpolation Point
[Figures (two slides of plots): response comparison of the linear, non-adapting neural, and adapting neural controllers at an interpolation point]
On-line Cost Optimization through Adaptation
[Figure: cost comparison of the linear, non-adapting neural, and adapting neural controllers, showing the cost reduction obtained through on-line adaptation]
Controller Performance at Extrapolation Point
[Figures (two slides of plots): response comparison of the linear, non-adapting neural, and adapting neural controllers at an extrapolation point]
Summary of Results
Properties of the learning control system:
• Improves global performance
• Lends itself to stability and robustness analysis via IQCs
• Preserves prior knowledge through constrained training
• Suspends and resumes adaptation, as appropriate
Future work:
• Computational complexity
• Aircraft system identification by neural networks
• Stochastic effects
• Optimal estimation
Acknowledgment: This research is funded by the National Science Foundation.
Robust and Reconfigurable Flight Control by Neural Networks
Silvia Ferrari
Department of Mechanical Engineering, Duke University
Many thanks to: Mark Jensenius
Proportional-Integral Neural Network Controller
[Block diagram: command y_s(t), command-state generator (CSG) producing x_c(t), scheduling-variable generator (SVG) producing a(t), plant P with state x(t), and the networks NNF, NNB, and NNI with associated gains C_F (with f[•] = 0), C_B, and C_I; the network contributions u_c(t), u_B(t), and u_I(t), together with the error e(t), sum to form the control u(t); the legend distinguishes blocks obtained by algebraic initialization from those adapted by on-line training]
Feedback Neural Network Initialization
[Diagram: feedback network NNB with scheduling input a and output z_B(t)]
Linear optimal control law; initialization requirements (R1) and (R2) are imposed at each design point (k) so that NNB reproduces the linear control law there.
Development of Feedback Initialization Equations
The network output and the network gradient, written for j = 1, 2, ..., q and for each output l = 1, 2, ..., m (using the l-th row of the gain matrix), yield the Feedback Neural Network Initialization Equations at every design point.
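The equations themselves are not recoverable from the transcript. Based on the gradient-based algebraic training described earlier, a plausible (assumed) form of the design requirements they encode is given below in LaTeX, where C_B^{(k)} is the linear feedback gain matrix at design point k and x̃ denotes the state deviation; this is a reconstruction, not the authors' exact statement.

    % Assumed form of the feedback-network initialization requirements (reconstruction)
    % (R1)  Zero output at each design point, where the state deviation is zero:
    \mathbf{u}_B^{(k)} = \mathrm{NN}_B\big[\tilde{\mathbf{x}} = \mathbf{0},\, \mathbf{a}^{(k)}\big] = \mathbf{0}
    % (R2)  Gradient equal to the linear feedback gain matrix at each design point:
    \left. \frac{\partial\, \mathrm{NN}_B}{\partial \tilde{\mathbf{x}}} \right|_{(k)} = -\,\mathbf{C}_B^{(k)},
    \qquad k = 1, 2, \ldots, p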