Minimum Norm State-Space Identification for Discrete-Time Systems Zachary Feinstein Advisor: Professor Jr-Shin Li
Agenda • Goals • Motivation • Procedure • Application • Future Work
Goals • Find a linear realization of the form: • To solve:
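A sketch of what these equations would be, assuming the standard discrete-time state-space form consistent with the definitions given under Procedure: Background (x the state, u the input, y the output, y_act the measured data):

\[
x(k+1) = A\,x(k) + B\,u(k), \qquad y(k) = C\,x(k)
\]
\[
\min_{A,\,B,\,C} \; \sum_{k} \bigl\| y(k) - y_{\text{act}}(k) \bigr\|
\]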
Goals • In the case of output data only, create a realization of the form: • This is called a historically-driven system
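Assuming the same state-space form with the measured output fed back as the input (consistent with the later instruction to set u(k) = y_act(k)), the historically-driven realization would read:

\[
x(k+1) = A\,x(k) + B\,y_{\text{act}}(k), \qquad \hat{y}(k) = C\,x(k)
\]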
Motivating Problem • Wanted to find a constant linear realization to approximate financial data • Use it for a 1-step Kalman predictor on the historically-driven system:
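For reference, the textbook one-step Kalman predictor on such a system has the form below; here K_k is the Kalman gain, and this standard form is an assumption, not necessarily the exact predictor used in the project:

\[
\hat{x}(k+1 \mid k) = A\,\hat{x}(k \mid k-1) + B\,y(k) + K_k\bigl(y(k) - C\,\hat{x}(k \mid k-1)\bigr), \qquad \hat{y}(k+1 \mid k) = C\,\hat{x}(k+1 \mid k)
\]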
Motivating Problem • The specific problem initially addressed was analysis of the credit market • Goal: noise reduction and prediction of default rates
Motivation • Why do we need a new technique? • Financial data does not exhibit any clear frequency response • Cannot use identification techniques that find peaks of the transfer function (e.g., ERA or FDD)
Procedure: Agenda • Background • Find Weighting Pattern • Find Updated Realization • Find Optimal Delta Value • Discussion of Output-Only Case
Procedure: Background • Let A, B, and C be matrices with entries in the complex plane • Let p = length of the output vector y(k) • Let n = length of the state vector x(k) • Let m = length of the input vector u(k) • So A ∈ ℂⁿˣⁿ, B ∈ ℂⁿˣᵐ, C ∈ ℂᵖˣⁿ
Procedure: Background • For simplification assume x0 = 0 • Want to solve: • Remove the leading data points where the input is zero, i.e., drop all k ∈ {0,…,M} with u(k) = 0
Procedure: Find Weighting Pattern • Discrete-time weighting pattern: • We can write:
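With x0 = 0 as assumed above, the standard discrete-time weighting pattern and the resulting input-output relation are:

\[
F_k = C A^{k} B, \qquad y(k) = \sum_{l=0}^{k-1} F_{k-1-l}\, u(l)
\]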
Procedure: Find Weighting Pattern • Our minimization problem can now be rewritten as: • Want to solve for optimal Fk for all k
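In terms of the weighting pattern, the minimization problem becomes:

\[
\min_{\{F_k\}} \; \sum_{k} \Bigl\| \sum_{l=0}^{k-1} F_{k-1-l}\, u(l) - y_{\text{act}}(k) \Bigr\|
\]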
Procedure: Find Weighting Pattern • Want an iterative approach • Since each norm in the sum involves only Fl for l ≤ k, we can find such a formula • Solving each as a minimum norm problem:
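One plausible reconstruction of the iterative formula, assuming each new F_{k-1} is taken as the minimum-norm solution of the kth equation with the earlier F_l held fixed (this relies on u(0) ≠ 0, which the earlier trimming of leading zero inputs guarantees):

\[
F_{k-1} = \Bigl( y_{\text{act}}(k) - \sum_{l=1}^{k-1} F_{k-1-l}\, u(l) \Bigr)\, u(0)^{\dagger}, \qquad u(0)^{\dagger} = \frac{u(0)^{*}}{\|u(0)\|^{2}}
\]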
Procedure: Find Realization • Given the weighting pattern computed above • Now we have an objective function: • Again want an iterative approach to solve it
Procedure: Find Realization • Use a convex combination of the previous best solution and the optimal solution for the next norm:
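Concretely, for each matrix M ∈ {A, B, C} this update would take the form below, where M̂ denotes the minimum-norm optimum for the next norm (the [0,1] range is implied by the convex combination):

\[
M \;\leftarrow\; (1-\delta_M)\, M + \delta_M\, \widehat{M}, \qquad \delta_M \in [0,1]
\]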
Procedure: Find Realization • For the kth update, solve the minimum norm problem: • These values solve:
Procedure: Find Realization • Choose to update the matrices in the order B, then A, then C:
Procedure: Find Realization • This update order was chosen since: • Holding C constant, F0 lets us find the optimal B • Using this optimal B together with C, F1 lets us find the optimal A • It is then logical to update C next
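A sketch of this reasoning, assuming the weighting-pattern identities F0 = CB and F1 = CAB and minimum-norm solutions taken via pseudoinverses (the pseudoinverse route is an assumption, not stated on the slide):

\[
\widehat{B} = C^{\dagger} F_0, \qquad \widehat{A} = C^{\dagger} F_1 \widehat{B}^{\dagger}
\]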
Procedure: Find Optimal Delta • Want to solve for the optimal delta values such that:
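Presumably each delta is chosen so the updated realization minimizes the full objective; in symbols (an assumption consistent with the search described next, with y(k; δ_M) the output after the convex-combination update):

\[
\delta_M^{*} = \arg\min_{\delta_M} \; \sum_{k} \bigl\| y(k;\, \delta_M) - y_{\text{act}}(k) \bigr\|, \qquad M \in \{A, B, C\}
\]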
Procedure: Find Optimal Delta • First discuss how to solve for δB • Then discuss δC since it is similar to δB • Finally, discuss δA because this case has higher order terms
Procedure: Find Optimal δB • For simplification, rewrite the optimization problem as: • By way of a counterexample, it can be seen that δB ≥ 0
Procedure: Find Optimal δB • Using properties of norms, mainly the triangle inequality:
Procedure: Find Optimal δB • Using these inequalities it can be seen that:
Procedure: Find Optimal δB • Therefore we can find upper and lower bounds for δB:
Procedure: Find Optimal δB • Using these bounds, run a search algorithm to find the optimal δB (a MATLAB sketch follows this list) • Evaluate at the 2 endpoints and 2 interior points • If the value at an endpoint is smallest, recurse with that endpoint and the nearest interior point as the new endpoints • Otherwise, choose the 2 points surrounding the minimum as the new endpoints and recurse • Terminate when the interval is below some threshold
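A minimal MATLAB sketch of this search. The names deltaSearch, J, and tol are illustrative, not from the original implementation; J is any handle mapping a scalar delta to the objective value:

```matlab
function dOpt = deltaSearch(J, lo, hi, tol)
% Recursive four-point interval search for the delta minimizing J.
% J        - handle mapping a scalar delta to the objective value
% lo, hi   - current search interval (from the norm-inequality bounds)
% tol      - terminate once the interval is narrower than tol
    if hi - lo < tol
        dOpt = (lo + hi) / 2;
        return
    end
    pts  = linspace(lo, hi, 4);    % 2 endpoints + 2 interior points
    vals = arrayfun(J, pts);
    [~, i] = min(vals);
    if i == 1                      % smallest at left endpoint
        dOpt = deltaSearch(J, pts(1), pts(2), tol);
    elseif i == 4                  % smallest at right endpoint
        dOpt = deltaSearch(J, pts(3), pts(4), tol);
    else                           % interior minimum: bracket it
        dOpt = deltaSearch(J, pts(i-1), pts(i+1), tol);
    end
end
```

For example, deltaSearch(@(d) objectiveAsFunctionOfDeltaB(d), dLo, dHi, 1e-4) would return an approximate optimal δB given bounds dLo and dHi (all hypothetical names).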
Procedure: Find Optimal δC • Analogous to δB • Rewrite the objective function as: • The same properties can be used to find an upper bound on this objective function
Procedure: Find Optimal δC • We can use the same properties as before to find bounds on δC: • Therefore the same search algorithm as in the δB case finds the optimal δC
Procedure: Find Optimal δA • To simplify, we first want to find a linear approximation in δA for: • Using knowledge of exponentials, we can say:
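A plausible form of this linear approximation, assuming "knowledge of exponentials" refers to the first-order expansion of the powers of the updated matrix A + δ_A(Â − A) that enter each F_k:

\[
\bigl( A + \delta_A (\widehat{A} - A) \bigr)^{j} \;\approx\; A^{j} + \delta_A \sum_{i=0}^{j-1} A^{i}\, (\widehat{A} - A)\, A^{\,j-1-i}
\]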
Procedure: Find Optimal δA • Using this linear approximation, we can rewrite the minimization problem to be:
Procedure: Find Optimal δA • Given the linearization in δA we can use the same properties as with δB to find bounds on δA • Using these bounds, we can run the same search algorithm as given for δB • This search will run on the full objective function, not the linearized version
Procedure: Output-Only Case • The more important case for us, given the motivating problem of financial data • The input for financial markets is unknown • Same procedure as given before • In finding the optimal weighting pattern, let u(k) = y_act(k) for all k
Application • Implemented in MATLAB with a few additions to the procedure • Tried on a test input-output system • Discussion of the unsuccessful results for the test case
Application: Implementation • MATLAB was chosen for its native handling of matrix operations • A few differences between the implementation and the procedure given before (a sketch follows this list) • The initial C matrix is chosen as a random matrix with elements between 0 and 1 • If a δ drops below some threshold, stop updating the corresponding matrix • After calculation, if A is an unstable matrix (i.e., |λmax| > 1), restart with a new initial C matrix • At the end of each pass, compare the new value of the objective function to the previous one • If better by more than ε, iterate through again • If better by less than ε, stop and keep the new realization • If worse by any amount, stop and keep the old realization
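A minimal sketch of the outer loop these bullets describe. Here updateRealization and objective are hypothetical placeholders for the inner matrix updates and the sum-of-norms objective, u and yact are the data, and the dimensions are assumed for illustration:

```matlab
% Sketch of the implementation's outer loop (illustrative only).
n = 4; m = 2; p = 2;                    % assumed dimensions
epsTol = 1e-6;                           % improvement threshold (epsilon)
A = zeros(n); B = zeros(n, m);
C = rand(p, n);                          % random initial C in [0, 1]
Jold = inf;
while true
    [Anew, Bnew, Cnew] = updateRealization(A, B, C, u, yact);
    if max(abs(eig(Anew))) > 1           % unstable A: restart with new C
        A = zeros(n); B = zeros(n, m); C = rand(p, n); Jold = inf;
        continue
    end
    Jnew = objective(Anew, Bnew, Cnew, u, yact);
    if Jnew > Jold                       % worse: keep the old realization
        break
    end
    A = Anew; B = Bnew; C = Cnew;        % accept the new realization
    if Jold - Jnew < epsTol              % marginal gain: stop here
        break
    end
    Jold = Jnew;                         % clear gain: iterate again
end
```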
Application: Input-Output Test • Run the MATLAB code on a well-defined state-space system:
Application: Input-Output Test • The resulting calculated realization was:
Application: Input-Output Test • The objective function had a value of 37.7 for this calculated realization • Easier to see in the plots on the next 3 slides: • The per-step error value plotted against k • Output of the test system (first output only) • Output of the calculated system (first output only)
Application: Discussion • As shown, these results indicate the technique was unsuccessful; possible causes: • The δ values are assumed to be small, which is not necessarily true • The convex combination is assumed to move toward a better solution, which is seen not to be the case • Replacing the initial minimization problem with the best approximation of the weighting pattern means some of the relationships between the elements of [A, B, C] may be lost
Future Work • There are 2 types of techniques that may be useful for solving this problem and finding a better solution: • Gradient Descent Method • Heuristic Approach
Future Work: Gradient Descent • Advantages: • Mathematically robust • Proven to find a local minimum • Disadvantages: • With mn + n² + np variables, this will take a long time to solve • The objective function (as a sum of norms) is large, so the gradient may take a great deal of computational power and memory to compute and store
Future Work: Heuristic Approach • Examples: genetic algorithms, simulated annealing • Advantage: • Can somewhat control the level of computational complexity • Disadvantage: • Only finds a "good" solution
Thank you Questions?