
Selected Nonlinear Problems

Selected Nonlinear Problems. K. Sikorski, C. Boonyasiriwat, K. Litchfield, C. Xiong. School of Computing, University of Utah. (1) Fixed Points – Ellipsoid Iteration, C. Boonyasiriwat, C. W. Tsay. (2) Fixed Points – Combustion Chemistry, C. Xiong, C. Wight.


Presentation Transcript


  1. Selected Nonlinear Problems. K. Sikorski, C. Boonyasiriwat, K. Litchfield, C. Xiong. School of Computing, University of Utah

  2. (1) Fixed Points – Ellipsoid Iteration, C. Boonyasiriwat, C. W. Tsay. (2) Fixed Points – Combustion Chemistry, C. Xiong, C. Wight. (3) Multivariate Polynomial Systems (from AB = C), K. Litchfield

  3. (1) Fixed Point – Globally Lipschitz Class. Past work: various complexity and algorithmic results – ellipsoid (exterior and interior) and multivariate bisection-envelope algorithms. Focus:

  4. Exterior Ellipsoid Algorithm. Residual complexity (q = 1): [formula not captured in transcript]. Result 1: Numerically Stable Implementation
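The flavor of an exterior ellipsoid iteration can be sketched with a textbook central-cut ellipsoid update: for a contraction F with Lipschitz constant q < 1, the fixed point x* satisfies ⟨x − F(x), x* − x⟩ ≤ 0 at every x, so g = x − F(x) is a valid cutting direction. This is only a minimal sketch under that assumption; the function name is hypothetical, and the numerically stable implementation of Result 1 would maintain the ellipsoid matrix in factored form rather than updating it directly.

```python
import numpy as np

def ellipsoid_fixed_point(F, n, R=1.0, eps=1e-6, max_iter=10_000):
    """Central-cut ellipsoid iteration for x = F(x), with F a contraction
    (Lipschitz constant q < 1) whose fixed point lies in the ball of
    radius R about the origin; requires n >= 2.

    Since <x - F(x), x* - x> <= 0 for a contraction, g = x - F(x) is a
    valid cutting direction, and the standard central-cut update keeps
    the fixed point x* inside every ellipsoid.
    """
    c = np.zeros(n)                  # ellipsoid center
    A = (R ** 2) * np.eye(n)         # ellipsoid: (x - c)' A^-1 (x - c) <= 1
    best_c, best_r = c.copy(), np.linalg.norm(c - F(c))
    for _ in range(max_iter):
        g = c - F(c)                 # cut: x* lies in g . (x - c) <= 0
        r = np.linalg.norm(g)
        if r < best_r:               # remember the best residual seen
            best_c, best_r = c.copy(), r
        if r < eps:
            break
        b = A @ g / np.sqrt(g @ A @ g)
        c = c - b / (n + 1)
        A = (n * n / (n * n - 1.0)) * (A - (2.0 / (n + 1)) * np.outer(b, b))
    return best_c
```

The returned point is the center with the smallest residual ‖x − F(x)‖ encountered, matching the residual solution criterion used for the q = 1 boundary case.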

  5. Directionally Lipschitz Class. [Figure not captured in transcript: a map F on [0, 1] and its solution set S(F).] References: Vassin, V., Eremin, E., 2005, Fejér Type Operators and Iterative Processes (in Russian), Russian Academy of Sciences; Vassin, V., Ill-posed problems with a priori information: methods and applications, Russian Academy of Sciences.

  6. Result 2: (a) The complexity of the Exterior Ellipsoid Algorithm in the DL class is: [formula not captured in transcript]. (b) The complexity of the Interior Ellipsoid Algorithm in the DL class is: [formula not captured in transcript].

  7. Numerical Tests: 2- to 9-dimensional functions; comparison to the Simple Iteration Algorithm.
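The Simple Iteration Algorithm used as the comparison baseline is the classical Picard iteration x_{k+1} = F(x_k). A minimal sketch follows; the 2-dimensional test function at the bottom is illustrative only, not one of the slides' test cases:

```python
import numpy as np

def simple_iteration(F, x0, eps=1e-8, max_iter=10_000):
    """Classical Picard (simple) iteration x_{k+1} = F(x_k).

    Converges linearly when F is a contraction (Lipschitz constant
    q < 1); it is the baseline the ellipsoid algorithms are compared
    against.  Stops when the step falls below eps in the max norm.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x_new = F(x)
        if np.max(np.abs(x_new - x)) < eps:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Illustrative 2-dimensional contraction with fixed point (0, 0).
F = lambda x: 0.5 * np.array([np.sin(x[1]), np.cos(x[0]) - 1.0])
x_star, iters = simple_iteration(F, [1.0, 1.0])
```

For a contraction with constant q, the iteration count grows like log(1/eps)/log(1/q), which is the behavior the ellipsoid algorithms are benchmarked against.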

  8. Test function: [equation not captured in transcript], where z is a complex variable and c is a complex constant.

  9. Test function: [equations not captured in transcript], where the component functions are defined for i = 1, 2, and m is an arbitrary integer.

  10. Test function: [equations not captured in transcript], where the component functions are defined for i = 1, 2, and m is an arbitrary integer.

  11. Test function: [equation and definitions not captured in transcript].

  12. Test function: [equation and definitions not captured in transcript].

  13. Test function: [equation and definitions not captured in transcript].

  14. (2) Fixed Point Problem – Combustion Chemistry. The burning surface temperature Ts of an explosive material (HMX, etc.) depends uniquely on the solid initial temperature T0 and the gas-phase pressure P, and is a solution of the fixed point problem: [equation not captured in transcript]

  15. The function G(Ts) is defined as: [equation not captured in transcript], where the condensed-phase mass flux m is: [equation not captured in transcript]

  16. The function G(Ts) can be simplified by combining all the constants: [equation not captured in transcript]. A zero-finding problem is therefore defined as: [equation not captured in transcript]. Given the values of T0 and P, Ts and m are uniquely determined by solving it.

  17. Class of Functions: [definition not captured in transcript]

  18. A hybrid bisection–regula falsi–secant (BRS) method that combines: two steps of the regula falsi method, the secant method, and one step of the bisection method. Guaranteed to converge in the worst case and optimal on the average. Number of iterations in the average case: [formula not captured in transcript]
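A minimal sketch of a safeguarded hybrid root finder in the spirit of the BRS method: fast regula falsi/secant trial steps from the current bracket, with bisection as a safeguard so worst-case convergence is guaranteed. The exact BRS step schedule (two regula falsi steps, a secant step, one bisection step) is simplified here; the function name and the every-third-step safeguard are assumptions, not the slides' algorithm:

```python
def brs_solve(f, a, b, eps=1e-10, max_iter=200):
    """Safeguarded hybrid root finder: regula falsi / secant trial steps
    from the current bracket [a, b], with a bisection step whenever the
    trial leaves the bracket and on every third iteration, so the
    bracket provably shrinks and worst-case convergence is guaranteed.
    """
    fa, fb = f(a), f(b)
    assert fa * fb <= 0, "the root must be bracketed: f(a) * f(b) <= 0"
    for k in range(max_iter):
        if b - a <= eps:
            break
        # Fast step: secant / regula falsi through the bracket endpoints.
        x = b - fb * (b - a) / (fb - fa) if fb != fa else 0.5 * (a + b)
        # Safeguard: bisect if the trial point is outside the bracket,
        # and periodically, to force geometric shrinkage.
        if not (a < x < b) or k % 3 == 2:
            x = 0.5 * (a + b)
        fx = f(x)
        if fx == 0.0:
            return x
        if fa * fx <= 0.0:
            b, fb = x, fx      # root lies in [a, x]
        else:
            a, fa = x, fx      # root lies in [x, b]
    return 0.5 * (a + b)
```

Applied to the combustion problem, f would be the simplified zero-finding form of the G(Ts) equation, bracketed over physically admissible surface temperatures.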

  19. Iteration graph of the BRS method: 60 × 50 sampling points in the domain 280 K < T0 < 460 K and 0 atm < P < 3000 atm. Average = 10.5 and maximum = 13 iterations for [tolerance not captured in transcript].

  20. Improvements to the BRS method: run two steps of the hyper-bisection method in addition to the original BRS, with a different parameter for each subdomain of T0 and P. The parameter is given by: [formula not captured in transcript] for all subdomains, where [definitions not captured in transcript].

  21. Iteration graph of the MBRS method: Average = 5.7 and maximum = 6 iterations for [tolerance not captured in transcript].

  22. Conclusions: A hybrid bisection–regula falsi–secant method was developed for solving nonlinear equations derived from a combustion model. Two additional steps of the hyper-bisection method reduce the average number of iterations from 10.5 to 5.7. For different explosive materials, other sets of parameters are expected to solve the fixed point problem efficiently.

  23. 3. DIVIDE-AND-CONQUER ALGORITHMS FOR MATRIX MULTIPLICATION We consider the matrix multiplication problem of computing Z = X Y and generalize the algorithm found by Strassen. Each matrix is n x n, and we partition each into l^2 submatrices, each of size n/l x n/l. (Rows and columns of zeros are added if necessary.) The submatrices of X, Y, and Z will be denoted by Xij, Yij, and Zij (i, j = 1, …, l).

  24. We construct recursive, divide-and-conquer algorithms which use m multiplications of pairs of n/l x n/l matrices. (For Strassen's algorithm, l = 2 and m = 7. For the trivial algorithm, m = l^3.)
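For concreteness, the l = 2, m = 7 instance (Strassen's algorithm) can be sketched with its standard seven block products; the power-of-two size assumption and the recursion cutoff are implementation choices, not part of the slides:

```python
import numpy as np

def strassen(X, Y, cutoff=64):
    """Strassen's divide-and-conquer multiplication (l = 2, m = 7).

    Each n x n matrix is split into four n/2 x n/2 blocks; seven block
    products replace the trivial eight, giving O(n^log2(7)) ~ O(n^2.807).
    Assumes n is a power of two (pad with zero rows/columns otherwise).
    """
    n = X.shape[0]
    if n <= cutoff:                      # fall back to the ordinary product
        return X @ Y
    h = n // 2
    X11, X12, X21, X22 = X[:h, :h], X[:h, h:], X[h:, :h], X[h:, h:]
    Y11, Y12, Y21, Y22 = Y[:h, :h], Y[:h, h:], Y[h:, :h], Y[h:, h:]
    M1 = strassen(X11 + X22, Y11 + Y22, cutoff)
    M2 = strassen(X21 + X22, Y11, cutoff)
    M3 = strassen(X11, Y12 - Y22, cutoff)
    M4 = strassen(X22, Y21 - Y11, cutoff)
    M5 = strassen(X11 + X12, Y22, cutoff)
    M6 = strassen(X21 - X11, Y11 + Y12, cutoff)
    M7 = strassen(X12 - X22, Y21 + Y22, cutoff)
    Z11 = M1 + M4 - M5 + M7
    Z12 = M3 + M5
    Z21 = M2 + M4
    Z22 = M1 - M2 + M3 + M6
    return np.block([[Z11, Z12], [Z21, Z22]])
```

The coefficients of the seven products and the four block sums are exactly the kind of coefficient set the trilinear systems below solve for.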

  25. Known Matrix Multiplication Algorithms

      exponent for |    m for l =    | algorithm
      complexity   |  2  |  3  |  4  | if known
         3.000     |  8  | 27  | 64  | trivial
         2.854     |     | 23  |     | Laderman's
         2.814     |     | 22  |     |
         2.807     |  7  |     | 49  | Strassen's
         2.771     |     | 21  |     |
         2.585     |  6  |     | 36  |
         2.41      |     |     |     | Umans, Cohn, et al.
         2.377     |     |     | 27  |
         2.376     |     |     |     | Coppersmith, Winograd
         2.350     |     |     | 26  |
         2.000     |  4  |  9  | 16  |

  26. Where Strassen's algorithm uses only integral multiples of the submatrices, we allow any complex multiples. The class of algorithms is given conceptually by two sets of equations. First, we compute m products Mk, k = 1, …, m, of pairs of n/l x n/l matrices formed from the submatrices of X and Y and constant coefficients.

  27. We then compute the l^2 submatrices of Z using the Mk and more constant coefficients.

  28. The needed coefficients are solutions to this system of equations: [equations not captured in transcript]. The right-hand side is 1 if, and only if, the Zij term in the definition of matrix multiplication contains the XefYgh term. Each system of equations in the family has l^6 equations in 3l^2 m variables. The equations are trilinear (meaning linear in each variable but involving products of three variables) and nonhomogeneous.
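This trilinear system is known in the literature as the Brent equations. A sketch of its sum-of-squares residual, with coefficient tensors a, b, c of shape (m, l, l); the index convention is an assumption consistent with the description above:

```python
import numpy as np

def brent_residual(a, b, c):
    """Sum of squares of the trilinear (Brent) equations for Z = X Y.

    a, b, c have shape (m, l, l): a[k] holds the X-coefficients of the
    product M_k, b[k] the Y-coefficients, and c[k] the coefficients of
    M_k in each Z_ij.  The equations require, for all index tuples,
        sum_k a[k,e,f] * b[k,g,h] * c[k,i,j] = 1 if f == g, e == i, h == j
                                               0 otherwise,
    i.e. exactly when X_ef * Y_gh occurs in Z_ij.
    """
    l = a.shape[1]
    # Left-hand sides of all l^6 equations at once.
    lhs = np.einsum('kef,kgh,kij->efghij', a, b, c)
    rhs = np.zeros_like(lhs)
    for e in range(l):
        for f in range(l):
            for h in range(l):
                rhs[e, f, f, h, e, h] = 1.0
    return np.sum(np.abs(lhs - rhs) ** 2)
```

A coefficient set is an exact multiplication algorithm precisely when this residual is zero; Strassen's coefficients for l = 2 and m = 7 give residual 0.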

  29. Each solution to one of these systems of equations provides the coefficients for one of the algorithms, and each such algorithm provides a solution to the corresponding system of equations. The smallest system of equations known to have solutions is the one for l = 2 and m = 7. Strassen's algorithm is one of these solutions. This system has 64 equations in 84 complex variables.

  30. The goal is to find solutions for many of the systems of equations for which no solutions are known, or to prove that no solutions are possible. The initial series of attempts found more than 300 approximate solutions for the l = 2 and m = 7 system but did not find solutions for any of the other systems.

  31. About Solving the Systems

        |    | number of | number of | name of
      l | m  | equations | variables | example
      2 |  4 |     64    |     48    | (too good)
      2 |  5 |     64    |     60    |
      2 |  6 |     64    |     72    |
      2 |  7 |     64    |     84    | Strassen's
      2 |  8 |     64    |     96    | trivial
      3 |  9 |    729    |    243    | (too good)
        |    |    ...    |           |
      3 | 23 |    729    |    621    | Laderman's
        |    |    ...    |           |
      3 | 27 |    729    |    729    | trivial
      4 | 16 |   4096    |    768    | (too good)
        |    |    ...    |           |
      4 | 48 |   4096    |   2304    |
      4 | 49 |   4096    |   2352    | (Strassen's)
        |    |    ...    |           |
      4 | 64 |   4096    |   3072    | trivial

  32. Solution Strategy. Success solving the l = 2 and m = 7 system:
      ● Uses an improvement to the line search method.
      ● Minimizes the sum of squares of the absolute values of the implicit equations.
      Stages of a line search step:
      ● Select a suitable line (many choices).
      ● Compute the best improvement on the line directly.
      ● Update variable values.
      Computing the best improvement directly is the line search method improvement. It works for systems of equations that are linear in each variable, where we are minimizing the sum of squares of the absolute values of the implicit equations.
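Along a line where each equation stays linear in the step parameter t, every residual has the form p_i + t·q_i, so the sum of squares is an exact quadratic in t and its minimizer has a closed form. A sketch of that direct computation (the function name is hypothetical):

```python
import numpy as np

def best_step(p, q):
    """Exact minimizer over real t of sum_i |p_i + t * q_i|^2.

    On a suitable line every (trilinear) equation residual is linear
    in the step t, so the sum of squares is a quadratic in t whose
    best improvement is computed directly instead of searched for.
    """
    denom = np.sum(np.abs(q) ** 2)
    if denom == 0.0:          # residuals do not change along this line
        return 0.0
    # d/dt sum |p + t q|^2 = 2 sum Re(conj(q) p) + 2 t sum |q|^2 = 0
    return -np.sum(np.real(np.conj(q) * p)) / denom
```

Each line search step would evaluate p (current residuals) and q (their rates of change along the chosen line), apply best_step, and update the changing variables by t times their direction components.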

  33. Program Flow. Establish initial variable values. Compute the “sum of squares.” For each line search step:
      ● Establish a line using partial derivatives of the “sum of squares” w.r.t. the real and imaginary parts of all variables.
      ● Edit those down to form a suitable line.
      ● Compute improvement parameters and the expected result.
      ● Update variable values.
      ● Compute the new “sum of squares.”
      ● Test the actual result against the expected result.
      Output final results.

  34. Line suitability:
      ● If x and y appear in the same product of any equation, at most one of them may change on the line.
      ● For these trilinear equations, at most one-third of the variables may change on a line.
      First attempts:
      ● Randomly chose 80 lines.
      ● Computed what each would do.
      ● Used the best.
      ● Took 2 to 4 hours (l = 2 and m = 7).

  35. Current attempts:
      ● Compute partial derivatives of the “sum of squares” w.r.t. the real and imaginary parts of each variable to make an “unsuitable” line.
      ● Zero enough of them to make a suitable line.
      ● Use the line.
      ● Takes 35 seconds (l = 2 and m = 7).
      ● Five minutes gets close to full computer precision.

  36. Termination considerations:
      ● We expect local minima often.
      ● We may hit a saddle (small improvement before significant improvement).
      Termination conditions:
      ● At first, whenever an improvement became too small.
      ● Now, when many consecutive improvements are all too small.

  37. Further Research
      ● Get the “best” accuracy.
      ● Determine improvements over using partial derivatives of the “sum of squares” for line selection.
      ● Refine the termination conditions.
      ● Understand the effects of variable starting values.
      ● Pursue useful results from the l = 4 and m = 48 system (4096 equations in 2304 complex variables). Each line search step takes about 4 seconds, with fairly obvious convergence toward local minima in 36 steps.
      ● Make the program more usable by other people for other systems of equations.
      ● Find more ways to convert systems to be linear in each variable.

  38. Open Problems: F(X) = X
      • High-dimensional problems
      • Infinity-norm case
      • Sparse functions
      • Interior Ellipsoid implementation
      • Applications
