Lecture 14: Stability and Control II
Remember that our analysis is limited to linear systems, although we will apply linear control to nonlinear systems, sometimes successfully.
Reprise of stability from last time.
The idea of feedback control.
Reprise: We are dealing with holonomic systems and working with Hamilton's equations. Look at equilibria, states at which the right-hand sides of Hamilton's equations vanish, with an equilibrium force holding the system there.
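A sketch of what that looks like, assuming the generalized forces Q enter the momentum equations in the usual way:

  \dot q^i = \frac{\partial H}{\partial p_i}, \qquad \dot p_i = -\frac{\partial H}{\partial q^i} + Q_i

At an equilibrium (q_0, p_0) with equilibrium force Q_0, both right-hand sides vanish:

  \left.\frac{\partial H}{\partial p_i}\right|_0 = 0, \qquad \left.\frac{\partial H}{\partial q^i}\right|_0 = Q_{0i}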
Reprise: We can combine the coordinates and the momenta into a state vector x and write the system in terms of x. Equilibrium in this setting requires the time derivative of x to vanish. Last time we worked in q, p space.
Reprise: We ask what happens if we perturb q and p, but NOT Q.
Reprise: The coefficients on the right-hand sides are constant matrices, so we can write the equations in a unified matrix notation. How does this work in state space?
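In state space the perturbation obeys a constant-coefficient linear system. A sketch, writing the perturbation of the state as x' (my notation for this summary, not the lecture's):

  x = x_0 + x', \qquad \dot x' = A\, x'

where A is a constant matrix assembled from the second derivatives of H evaluated at the equilibrium.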
Reprise: This is a mixed notation intended to indicate what goes where; it is not a meaningful notation in terms of the location of the indices.
Reprise: The matrix is constant, so the state vector has exponential solutions.
Reprise We can write this symbolically as which is a polynomial in s of the same degree as the number of variables — generally twice as many as there are generalized coordinates/degrees of freedom (This is not fully general, but it will suit our current purposes.)
Reprise: And the real part of s tells us about stability.
If Re(s) < 0 for all s, the system is asymptotically stable.
If Re(s) > 0 for any s, the system is unstable.
If Re(s) = 0 for all s, the system is marginally stable.
stable: if we move the system away from equilibrium, the system will go back
unstable: if we move the system away from equilibrium, the error will grow (initially) exponentially
marginally stable: if we move the system away from equilibrium, the error will oscillate about its reference position
OK, let’s take a break and go look at the stability of a three link robot Note that I told you some wrong things last time I was working too fast and let some stuff slip GO TO MATHEMATICA
We quit here; the remainder of this set will reappear in our next class.
Let's think about control in the context of the simple inverted pendulum: add a small, variable torque at the pivot, with q the angle of the pendulum.
There's a change of sign from the simple pendulum from last time because I have chosen a different definition of q. We have equilibrium at q = 0, and Q = 0 there as well. We know that this will be unstable if it is perturbed with Q remaining zero. Let's see how this goes in a state space representation.
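A minimal sketch of the linearized equation of motion, assuming a point mass m on a massless rod of length l, with q measured from the upright position (these parameters are my assumption, not the slides'):

  m l^2 \ddot q - m g l\, q = Q

With Q = 0 the characteristic polynomial is s^2 - g/l = 0, so one root is real and positive: unstable, as claimed. In state space, with x = (q, \dot q),

  \dot x = \begin{pmatrix} 0 & 1 \\ g/l & 0 \end{pmatrix} x + \begin{pmatrix} 0 \\ 1/(m l^2) \end{pmatrix} Q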
If q starts to increase, we feel intuitively that we ought to add a torque to cancel it. We can expand the feedback term.
Multiply the column vector and the row vector; combine the forced system into a single homogeneous system.
The characteristic polynomial for this new problem can be solved for s², and I can make s² negative by applying a large enough gain g1. So this very simple feedback can make an unstable system marginally stable. We can do better . . .
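In terms of the pendulum parameters assumed above, the position feedback is Q = -g_1 q, so

  m l^2 \ddot q + (g_1 - m g l)\, q = 0, \qquad s^2 = -\frac{g_1 - m g l}{m l^2}

and for g_1 > m g l the roots are purely imaginary: the perturbation neither grows nor decays, it just oscillates.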
Suppose we feed back the speed of the pendulum as well as the position?
Combining everything again, we get a new homogeneous system, and now the characteristic polynomial comes from its determinant.
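With both gains (same assumed pendulum parameters), Q = -g_1 q - g_2 \dot q gives

  m l^2 \ddot q + g_2 \dot q + (g_1 - m g l)\, q = 0
  \quad\Longrightarrow\quad
  m l^2 s^2 + g_2 s + (g_1 - m g l) = 0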
The linear term, the feedback from the derivative, is the key. We can adjust this to get any real and imaginary parts we want. If you are familiar with the idea of a natural frequency and a damping ratio, then you might like to set the control problem up in that language.
The characteristic polynomial can be made the same as the one degree of freedom mass-spring-damper equation by choosing the gains to set a natural frequency ωn and a damping ratio ζ. The real part of the roots is then always negative. If ζ is less than unity, there is an imaginary part. If ζ equals unity the system is said to be critically damped.
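In that language (a sketch of the standard second-order form):

  s^2 + 2\zeta\omega_n s + \omega_n^2 = 0, \qquad s = \omega_n\left(-\zeta \pm \sqrt{\zeta^2 - 1}\right)

For the pendulum with my assumed parameters this means choosing the gains so that 2\zeta\omega_n = g_2/(m l^2) and \omega_n^2 = (g_1 - m g l)/(m l^2).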
This suggests a bunch of questions:
Is this generalizable to more complicated systems? YES
Is there a nice ritual one can always employ? SOMETIMES
Is this always possible? NO
Will the linear control control the nonlinear system? SOMETIMES
How much of this does it make sense to include in this course? ??
The question of possibility is really important, so I'm going to address that as soon as I can develop some more notation. The general perturbation problem for control will be a linear state equation driven by the inputs Q. For a single input system like the one we just saw, B will be a column vector and Q a scalar.
We want Q (or the scalar Q for one input) to be proportional to x: Q = -Gx, where the minus sign is conventional. We see that G has as many rows as there are inputs and as many columns as there are state variables. G is a row vector for single input systems.
Rename some dummy indices to make it possible to combine terms. Our control characteristic polynomial will come from the combined closed-loop matrix, and the question is: is it always possible to find G such that the roots are where we want them? For the single input case G is just a row of gains, one per state variable.
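Explicitly (a sketch):

  \dot x = A x + B Q, \quad Q = -G x \;\Longrightarrow\; \dot x = (A - B G)\, x

and the control characteristic polynomial is det(sI - A + BG) = 0.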
There are always at least as many gains as there are roots, so you'd think so. But it isn't always possible. The controllability criterion, which I will state without proof, is that the rank of the controllability matrix Q must be equal to the number of variables in the state. There are as many terms in Q as there are variables in the state.
Q has as many rows as there are variables. The number of columns in Q is equal to the number of variables times the number of inputs. In the single input case Q is a square matrix, AND there is a nice simple way to figure out what the gains must be for stability. We are not going to explore this (we haven't the time); it is covered in most decent books on control theory. We can get by with guided intuition.
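The matrix being referred to is the standard controllability matrix Q = [B, AB, A^2 B, ..., A^(n-1) B], and the rank test is easy to automate. A minimal sketch in Python/NumPy, using the single-input inverted pendulum with my assumed numbers (g/l = 2, 1/(m l^2) = 1):

```python
import numpy as np

# Linearized inverted pendulum about the upright equilibrium (assumed numbers).
A = np.array([[0.0, 1.0],
              [2.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

n = A.shape[0]
# Controllability matrix: [B, AB, A^2 B, ..., A^(n-1) B]
Qc = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

print(Qc)
print("controllable" if np.linalg.matrix_rank(Qc) == n else "not controllable")
```

For this example Qc = [[0, 1], [1, 0]], which has rank 2, so the pendulum with a pivot torque passes the test.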
Single input systems are much simpler than multi-input systems, but we have need of multi-input systems frequently. I will outline the intuitive approach to multi-input systems, which works best (at least for me) through the Euler-Lagrange equations. This may be a bit hard to follow; we'll do an example next time.
The Euler-Lagrange equations, which we can rewrite.
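For reference, a sketch of the starting point, with Q the generalized forces as before:

  \frac{d}{dt}\!\left(\frac{\partial L}{\partial \dot q^i}\right) - \frac{\partial L}{\partial q^i} = Q_i

Expanding the total time derivative brings the second derivatives \ddot q^i out explicitly, which is the form we perturb.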
For a steady equilibrium, which is what we are learning how to handle, we introduce a small perturbation of order ε and expand. Terms of order ε² can be dropped.
And we need to perturb the gradient of the Lagrangian to finish the linearization.
We can use our old method of converting to first-order ODEs on this and decide controllability (before we knock ourselves out trying to control it). The state vector is the perturbations in the coordinates together with their rates.
The A matrix and the B matrix then follow from the linearized equations.
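A sketch of the standard block structure, assuming the linearized Euler-Lagrange equations have been put in the form M\ddot q' + C\dot q' + K q' = F Q' (M, C, K, F are the constant matrices produced by the linearization; the symbols are mine, not the lecture's):

  x = \begin{pmatrix} q' \\ \dot q' \end{pmatrix}, \qquad
  A = \begin{pmatrix} 0 & I \\ -M^{-1}K & -M^{-1}C \end{pmatrix}, \qquad
  B = \begin{pmatrix} 0 \\ M^{-1}F \end{pmatrix}

With A and B in hand, controllability can be checked with the same rank test sketched earlier, before any effort goes into choosing G.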