Topics 3: Polynomials, Discrete Structures, Algebraic Complexity, Symbolic-Numeric Computation (Lecture 8)
Arithmetic with polynomials, rational functions and power series
• Computing with univariate polynomials is similar to long integer arithmetic: polynomials are represented by their coefficient lists, whereas integers are given by their binary (bit) expansion.
• In practical applications polynomials are often sparse, i.e. many of their coefficients are zero. In this case the internal representation consists of lists of pairs (index, non-zero coefficient), encoding only the non-zero terms.
• The usual convolution for polynomial multiplication is less efficient than multiplication via the Fast Fourier Transform (FFT). Convolution is the bilinear form associated with the syntactic encoding of a polynomial as a coefficient list, whereas the FFT exploits the fact that a polynomial is a function: it evaluates the polynomials at sufficiently many points (roots of unity) and then interpolates to obtain the coefficients of the product polynomial (see the sketch below).
• Straight-line programs (SLPs) model semi-numerical computations and polynomial evaluation with floating-point arithmetic. An SLP for polynomial evaluation is a directed acyclic graph whose nodes are labelled gates performing arithmetic operations on precomputed results.
• SLPs can also be used to represent polynomials. For example, in the sparse representation the determinant of a generic n × n matrix has n! non-zero terms, whereas an SLP encoding of this determinant polynomial requires no more than O(n^4) memory units.
• Efficient differentiation of polynomials can also be done using the SLP encoding.
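The evaluate-then-interpolate scheme can be sketched in a few lines of Python (Python and numpy are choices of this writeup, not of the slides); integer coefficients are assumed so that rounding recovers the exact product:

```python
# A minimal sketch of FFT-based polynomial multiplication using numpy's FFT.
import numpy as np

def poly_mul_fft(a, b):
    """Multiply two integer polynomials given as coefficient lists (lowest degree first)."""
    n = len(a) + len(b) - 1           # number of coefficients of the product
    size = 1 << (n - 1).bit_length()  # next power of two for the FFT
    fa = np.fft.fft(a, size)          # evaluate a at the size-th roots of unity
    fb = np.fft.fft(b, size)
    fc = fa * fb                      # pointwise products = values of the product
    c = np.fft.ifft(fc).real          # interpolate back to coefficients
    return [round(x) for x in c[:n]]  # round away floating-point noise

# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
print(poly_mul_fft([1, 2], [3, 4]))   # [3, 10, 8]
```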
Arithmetic with polynomials, rational functions and power series
• Besides basic arithmetic, some higher-level operations are of interest: Euclidean division, GCD and factoring. For example, the Lenstra-Lenstra-Lovász (LLL) basis reduction procedure outputs an exact factor of a given polynomial starting from an approximate factor.
• For multivariate polynomials, given in either dense or sparse encoding, two main data structures are currently used: the recursive representation (as univariate polynomials whose coefficients are polynomials in the remaining variables) and the distributive representation (as a sum of monomials, each consisting of a coefficient and a product of powers of the variables); both are sketched below.
• Recursive algorithms (such as factoring and resultant computations) favor the first representation, whereas the second one is used whenever single terms have to be manipulated (e.g. for term ordering in Groebner basis computations).
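A tiny illustration of the two encodings, using Python dictionaries (an implementation choice of this writeup):

```python
# Two encodings of p = 3*x^2*y + 5*y^2.

# Distributive representation: {exponent vector: coefficient}.
p_dist = {(2, 1): 3, (0, 2): 5}

# Recursive representation: a polynomial in x whose coefficients are
# polynomials in y, here {x-exponent: {y-exponent: coefficient}}.
p_rec = {2: {1: 3}, 0: {2: 5}}
```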
Arithmetic with polynomials, rational functions and power series
• Rational functions are represented using techniques analogous to those used to compute with rational numbers.
• Due to their infinitary nature, formal power series, Laurent series and Puiseux series cannot be finitely represented in general. Exceptions are series whose coefficients are given by an explicit formula or by some recursion. In that case a series can be stored in a CAS as a finite list of coefficients, together with a function for computing more coefficients on demand. This so-called lazy evaluation technique is implemented in Axiom, for example (see the sketch below).
• An alternative method uses truncation to some fixed, previously chosen order.
• Inversion and division of power series, Laurent series and Puiseux series can be based on the same data structures, for example by computing the coefficients recursively.
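A minimal sketch of the lazy evaluation idea, in the spirit of Axiom's streams (the class below is illustrative, not Axiom's actual interface):

```python
# Lazy power series: coefficients are computed on demand and cached.
from fractions import Fraction
from math import factorial

class LazySeries:
    def __init__(self, coeff_fn):
        self.coeff_fn = coeff_fn   # rule for the n-th coefficient
        self.cache = []            # finite list of coefficients computed so far

    def __getitem__(self, n):
        while len(self.cache) <= n:                    # extend the stored prefix
            self.cache.append(self.coeff_fn(len(self.cache)))
        return self.cache[n]

# exp(x): the n-th coefficient is 1/n!
exp_series = LazySeries(lambda n: Fraction(1, factorial(n)))
print([exp_series[n] for n in range(5)])  # [1, 1, 1/2, 1/6, 1/24]
```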
Division algorithm for polynomials
• Most symbolic algorithmic methods are based on the division algorithm for polynomials. Given a pair (f, g) in K[X]^2 (univariate polynomials over a field K) with g ≠ 0, the algorithm produces by successive reduction steps a uniquely determined pair (q, r) in K[X]^2 such that f = qg + r and r = 0 or deg(r) < deg(g); a sketch follows below.
• Iterated application of the division algorithm leads to the Euclidean algorithm, which returns a GCD of f and g. The algorithm can be augmented to the extended Euclidean algorithm, which in addition returns the cofactors of f and g in the representation of their GCD as a linear combination of f and g.
• The Euclidean algorithm is also the basic ingredient for the squarefree factorization of polynomials.
• Through multiplication of the dividend f by an appropriate power of the leading coefficient of the divisor g, the division algorithm can also be performed in fraction-free form. For univariate polynomials with indeterminate coefficients, or multivariate polynomials regarded as such, this yields, up to a change of sign and a power of the leading coefficients, the resultant of f and g. In the special case g = f′ the resultant becomes the discriminant of f.
• The extended Euclidean algorithm for univariate polynomials over a field has many important applications: the Chinese remainder theorem, interpolation, inversion in extension rings, linear recurrence sequences, linear algebra for sparse matrices, etc.
• The squarefree part f_1 · · · f_r of a univariate polynomial f = f_1^(e_1) · · · f_r^(e_r), where f_1, . . . , f_r are irreducible and pairwise non-associated, is f/GCD(f, f′).
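A minimal sketch of the division algorithm over Q, with polynomials as coefficient lists (lowest degree first) and exact Fraction arithmetic; iterating it on (g, r) gives the Euclidean algorithm:

```python
from fractions import Fraction

def poly_divmod(f, g):
    """Return (q, r) with f = q*g + r and r = 0 or deg(r) < deg(g)."""
    f = [Fraction(c) for c in f]
    g = [Fraction(c) for c in g]
    q = [Fraction(0)] * max(len(f) - len(g) + 1, 1)
    r = f[:]
    while len(r) >= len(g) and any(r):
        shift = len(r) - len(g)
        coeff = r[-1] / g[-1]          # eliminate the leading term of r
        q[shift] = coeff
        for i, gc in enumerate(g):
            r[i + shift] -= coeff * gc
        while r and r[-1] == 0:        # normalize: drop trailing zeros
            r.pop()
    return q, r

# (x^2 - 1) = (x + 1)(x - 1) + 0
print(poly_divmod([-1, 0, 1], [-1, 1]))  # ([1, 1], [])
```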
Factorization of polynomials
• A polynomial in several variables over a field factors uniquely into a product of irreducible polynomials.
• The most basic factorization task concerns a univariate polynomial of degree n over a finite field F_q with q elements, where q is a power of a prime number. Berlekamp devised a deterministic algorithm using on the order of n^3 + nq operations in F_q, while the Cantor-Zassenhaus algorithm requires on the order of n^3 log n operations.
• A typical CAS factors polynomials of degree in the hundreds over a field with q = 2^n elements in several hours.
• For polynomials over Q or over algebraic number fields, the method of choice is factoring modulo a prime p followed by Hensel lifting. The method of factor combination, based on exhaustive search, has exponential running time for certain inputs.
• Very fast algorithms for numerical factorization over R and C work by computing root approximations.
• For multivariate polynomials with a small number of variables, a variant of Hensel lifting can be used.
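For concreteness, here is factorization over Q and over a small finite field in SymPy (the slides do not name a system; SymPy is used here purely as an illustration):

```python
from sympy import symbols, factor

x = symbols('x')
print(factor(x**4 - 1))             # over Q: (x - 1)*(x + 1)*(x**2 + 1)
print(factor(x**4 + 1, modulus=2))  # over F_2: (x + 1)**4
```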
Polynomial decomposition
• Polynomial decomposition is the problem of representing a given polynomial f(x) as a functional composition g(h(x)) of polynomials of smaller degree.
• It is useful for simplifying the representation of field extensions of high degree.
• The problem has applications in robot kinematics, faithful reparametrization of unfaithfully parametrized curves, etc.
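A small illustration using SymPy's decompose (again an illustrative choice, not prescribed by the slides):

```python
from sympy import symbols, decompose

x = symbols('x')
# x^4 + 2x^2 + 1 = g(h(x)) with h(x) = x^2 and g(y) = y^2 + 2y + 1
print(decompose(x**4 + 2*x**2 + 1))  # [x**2 + 2*x + 1, x**2]
```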
Groebner bases
• In rings R = K[X_1, . . . , X_n] of multivariate polynomials, the construction of Groebner bases takes over the role that the Euclidean algorithm plays for univariate polynomials: it is an algorithm for studying ideals in R and the varieties defined by polynomial sets in R over extension fields of K.
• Every non-zero polynomial in R is a finite sum of monomials a·t, where a in K is non-zero and t is a term, i.e. a power product of the indeterminates X_i.
• A linear order on the multiplicative monoid T of terms is called an (admissible) term order if 1 is the smallest term and multiplication of terms is monotone with respect to the order. Any term order is a well-order on T and induces a well-quasi-order on R. One can also consider term orders in which 1 is not required to be the smallest term; these orders are in general not well-orders on T. (Two common term orders are sketched below.)
• After choosing a fixed term order on T, one can define the reduction of a polynomial f by a set P of other polynomials in complete analogy to the division of univariate polynomials. Iterated application of such reductions eventually produces a normal form of f with respect to P. In general, normal forms are not unique.
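Two standard admissible term orders, written as Python sort keys on exponent vectors (a representation chosen here for illustration):

```python
# Terms are identified with their exponent vectors.
def lex_key(exp):
    return exp               # lexicographic: compare exponents left to right

def grlex_key(exp):
    return (sum(exp), exp)   # graded lex: total degree first, then lex

terms = [(0, 3), (2, 0), (1, 1)]          # y^3, x^2, x*y
print(sorted(terms, key=lex_key))          # [(0, 3), (1, 1), (2, 0)]
print(sorted(terms, key=grlex_key))        # [(1, 1), (2, 0), (0, 3)]
```

Note that the two orders rank y^3 and x^2 differently, while both make 1 (the zero vector) smallest.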
Groebner bases
• A finite set G ⊆ R is a Groebner basis of the ideal I generated by G iff the highest term of every non-zero polynomial f in I is a multiple of the highest term of some polynomial in G. This means that cancellation of highest monomials in linear combinations of polynomials from G is insignificant. An equivalent condition requires that normal forms of arbitrary polynomials in R with respect to reduction relative to G are unique.
• Given two polynomials f, g in R, there is an obvious minimal linear combination h of f and g with monomial coefficients in which a cancellation of highest monomials occurs; this polynomial h is called the S-polynomial of f and g (see the sketch below).
• Buchberger's algorithm constructs a Groebner basis G from a given finite ideal basis F. The algorithm iteratively adds to F the non-zero normal forms of S-polynomials (critical pairs) of pairs of polynomials in F. Termination of the process is guaranteed.
• The algorithm leaves a lot of leeway concerning the order in which S-polynomials are considered and reduced to normal form. There are a priori criteria that recognize certain S-polynomials as superfluous, since for theoretical reasons their normal form is guaranteed to be zero at the present or a later stage of the algorithm.
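The S-polynomial construction, spelled out with SymPy's leading-term and lcm helpers (SymPy is an assumption of this sketch; the construction itself is the one defined above):

```python
from sympy import symbols, LT, lcm, expand

x, y = symbols('x y')
f = x**2*y - 1
g = x*y**2 - x

lt_f = LT(f, x, y, order='grevlex')   # x**2*y
lt_g = LT(g, x, y, order='grevlex')   # x*y**2
L = lcm(lt_f, lt_g)                   # x**2*y**2

# Scale f and g so their highest monomials both become L, then subtract:
s_poly = expand(L/lt_f * f - L/lt_g * g)
print(s_poly)                         # x**2 - y: the highest terms cancel
```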
Groebner bases
• Numerous schemes have been proposed and tested in order to speed up Groebner basis computations, in general or for special types of input systems.
• In the worst case, the total degree of the polynomials in an ideal basis may grow doubly exponentially during the computation of a Groebner basis; nevertheless, the computation can in principle always be performed in exponential work space.
• A complexity problem arising frequently in practice is the occurrence of very large rational coefficients in Groebner basis computations over the field of rational numbers.
• Reduced Groebner bases are Groebner bases that are interreduced as polynomial sets and consist of monic polynomials. They can be obtained from an arbitrary Groebner basis with respect to the same term order by iterated interreduction of the polynomials in the basis. In contrast to an arbitrary Groebner basis, a reduced one is uniquely determined by the ideal and the term order (see the example below).
• Groebner walk algorithms use the important fact that a given polynomial ideal has only finitely many different reduced Groebner bases with respect to all term orders.
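The dependence of the reduced basis on the term order, shown with SymPy (illustrative; for this ideal the two reduced bases typically differ):

```python
from sympy import groebner, symbols

x, y = symbols('x y')
I = [x**2 - y, y**2 - x]
print(groebner(I, x, y, order='lex'))      # reduced basis wrt lex
print(groebner(I, x, y, order='grevlex'))  # a different reduced basis wrt grevlex
```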
Groebner bases
• Groebner bases (sometimes in combination with other polynomial algorithms, such as factorization) allow for the solution of a whole set of problems related to multivariate polynomials and their zeros:
• ideal membership,
• inclusion among ideals,
• computation of intersections and quotients of ideals,
• computation of the radical of an ideal,
• decomposition into primary ideals,
• solution of systems of polynomial equations,
• transformation of systems of algebraic equations into triangular form (illustrated below),
• geometric theorem proving,
• linear integer programming,
• etc.
• Buchberger's Groebner basis method for multivariate polynomials over a field is implemented in most general-purpose computer algebra systems, as well as in many special-purpose systems such as CoCoA, Felix, Macaulay, and Mas.
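Triangularization via a lex Groebner basis, again using SymPy as a stand-in for the systems named above:

```python
from sympy import groebner, symbols, solve

x, y = symbols('x y')
system = [x**2 + y**2 - 5, x*y - 2]
gb = groebner(system, x, y, order='lex')
print(gb)      # triangular form: the last basis element involves y only
print(solve(system, [x, y]))  # solved by back-substitution: (±1, ±2), (±2, ±1)
```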
Algorithmic invariant theory
• Consider a group G ⊆ GL_n(K) of invertible matrices over a field K. Then G acts on the polynomial ring K[x_1, . . . , x_n] by linear transformations of the variables.
• The invariant ring K[x_1, . . . , x_n]^G consists of all polynomials which are fixed by the action of G. A polynomial is an invariant iff it is constant on each orbit of the action of G on K^n.
• A classical goal of invariant theory is to find a finite set of invariants which generate K[x_1, . . . , x_n]^G as an algebra over K. A typical example is the case where G is the symmetric group S_n acting by permutations of the x_i (see the sketch below).
• Invariant theory has applications in many fields: equivariant dynamics, solving systems of algebraic equations with symmetries, theoretical physics and chemistry, coding theory, image processing, combinatorics, incidence geometry, and the cohomology of finite groups.
• Some CA packages are devoted to invariant theory: the invariant package of Mas, the Maple packages InVar and Symmetry, the Singular packages FinVar and InVar, and the Magma package for invariants.
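For G = S_2 acting on K[x, y], the invariant ring is generated by the elementary symmetric polynomials, and every symmetric polynomial can be rewritten in terms of them. A quick SymPy check (SymPy being an illustrative choice):

```python
from sympy import symbols, expand

x, y = symbols('x y')
f = x**2 + y**2          # symmetric: fixed under swapping x and y
e1, e2 = x + y, x*y      # elementary symmetric polynomials, the generators
print(expand(f - (e1**2 - 2*e2)))  # 0, hence f = e1**2 - 2*e2
```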
Algebraic Methods for Constructing Discrete Structures
Constructing and investigating discrete structures by algebraic methods has the following goals:
• lowering the complexity by using algebraic instead of combinatorial methods (symmetry approach, symmetry-adapted bases, etc.);
• identifying isomorphism classes of certain discrete structures (e.g. graphs, physical states, chemical isomers) with orbits of finite groups acting on finite sets (see the orbit-counting sketch below);
• transforming combinatorial problems into equivalent algebraic ones, especially into problems of group theory, or problems of linear and multilinear algebra (e.g. t-designs expressed as solutions of Diophantine equations);
• enumerating such structures for given parameters (for example, graphs with given numbers of vertices and edges);
• generating a complete list of all structures with prescribed properties for given parameters;
• generating such structures uniformly at random.
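A toy instance of the orbits-as-isomorphism-classes idea, counting isomorphism classes of simple graphs on 3 vertices by Burnside's lemma (a standard technique consistent with, but not named in, the slide):

```python
from itertools import permutations, combinations

vertices = range(3)
edges = list(combinations(vertices, 2))   # the 3 possible edges

def fixed_graphs(perm):
    """Count edge subsets invariant under the vertex permutation perm."""
    count = 0
    for mask in range(1 << len(edges)):
        chosen = {edges[i] for i in range(len(edges)) if mask >> i & 1}
        image = {tuple(sorted((perm[u], perm[v]))) for (u, v) in chosen}
        if image == chosen:
            count += 1
    return count

# Burnside: number of orbits = average number of fixed points over the group.
total = sum(fixed_graphs(p) for p in permutations(vertices))
print(total // 6)   # 4 isomorphism classes (0, 1, 2 or 3 edges)
```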
Algebraic Methods for Constructing Discrete Structures
• A typical non-mathematical example arises with isomers in chemistry: for a given chemical formula one has to construct all connected multigraphs in which the degree of each vertex corresponds to the valence of the respective atom. From this problem originated graph theory as well as combinatorial enumeration theory.
• Standard algorithmic methods, such as dynamic trees, union-find (sketched below), and the optimization algorithms of graph theory, play an important role in solving such problems.
• Character tables and permutation characters have been used efficiently for constructing low-rank permutation representations, and small-diameter arc-transitive and edge-transitive graphs, especially distance-transitive graphs.
• General group-theoretic packages, for example the comprehensive computer algebra systems Gap and Magma, can be used to investigate the structure and properties of discrete structures.
• Some additional packages and other special-purpose systems deserve separate mention, principally:
• Grape, a Gap share library package for computing with G-graphs, that is, graphs together with a group G of automorphisms of that graph (it was used in the classification of the primitive distance-transitive graphs for sporadic groups and in investigations in discrete combinatorial geometries);
• Co-Co, developed to investigate graphs and cellular rings associated with almost simple groups;
• Discreta, a package that at present covers the construction of various kinds of t-designs.
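A minimal union-find sketch, here used to count connected components of a (multi)graph, one of the standard subroutines mentioned above:

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, a):
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]  # path halving
            a = self.parent[a]
        return a
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

edges = [(0, 1), (1, 2), (3, 4)]
uf = UnionFind(5)
for u, v in edges:
    uf.union(u, v)
print(len({uf.find(i) for i in range(5)}))  # 2 connected components
```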
Summation and Integration
• Until quite recently the algorithmic task of summation, compared to its continuous analogue integration, had received relatively little attention within the CA community. The current intensified interest in summation was initiated by the application of algorithms for indefinite hypergeometric summation to problems of definite hypergeometric summation.
• Proving binomial identities and finding recurrences satisfied by such sums, a task notoriously difficult for the human mathematician, has become routine business for computers (see the example below). There are several packages dealing with summation, e.g. sumtools for Maple and hyp and hypq for Mathematica.
• An elementary function of a variable x is a function that can be obtained from the rational functions in x by repeatedly adjoining a finite number of nested logarithms, exponentials, and algebraic numbers or functions. Trigonometric functions and their inverses are also elementary (when they are written using complex exponentials and logarithms).
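Indefinite hypergeometric summation via Gosper's algorithm, using SymPy's implementation as a stand-in for the Maple and Mathematica packages named above:

```python
from sympy import symbols, factorial
from sympy.concrete.gosper import gosper_sum

k, n = symbols('k n', integer=True, nonnegative=True)
# Gosper's algorithm finds the antidifference k! of k*k!, giving the
# closed form sum_{k=0}^{n} k*k! = (n+1)! - 1.
print(gosper_sum(k * factorial(k), (k, 0, n)))  # factorial(n + 1) - 1
```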
Summation and Integration
• The problem of integration in finite terms is, given an elementary function f(x), to decide in a finite number of steps whether ∫ f(x) dx is also an elementary function, and to compute it explicitly if it is (see the example below).
• The first general algorithm for integrating elementary functions was presented by Risch. That algorithm was not ripe for a complete implementation, however, as a number of computational problems remained open.
• An algorithm for pure algebraic functions, for a restricted class of integrands (nested square roots), was implemented in Reduce. An alternative method is implemented in Axiom and Maple. An improved algorithm for the case of hyperelliptic integrands (square roots of rational functions) was implemented in Axiom. Another extension, which allows nested algebraic and transcendental terms in the integrand, was also partially implemented in Axiom.
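Both outcomes of the decision problem, shown with SymPy's partial Risch implementation (an illustration; the slide discusses Reduce, Axiom and Maple):

```python
from sympy import symbols, exp, integrate

x = symbols('x')
# Elementary antiderivative found: d/dx (x*exp(x**2)) = exp(x**2)*(1 + 2*x**2).
print(integrate(exp(x**2) * (1 + 2*x**2), x, risch=True))  # x*exp(x**2)
# Proven non-elementary: returned as an unevaluated NonElementaryIntegral.
print(integrate(exp(x**2), x, risch=True))
```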
Symbolic Methods for Differential Equations
• Differential equations are of central importance in applied mathematics. Whenever a continuous process must be modelled in the natural or engineering sciences (and increasingly often in the social sciences), chances are high that differential equations are used.
• Traditionally, the main emphasis in computer algebra has been on solving differential equations in closed form (see the example below). This can still be observed in differential Galois theory and Lie symmetry theory.
• Painlevé theory studies the behavior of solutions in the neighborhood of singularities and provides an important test for complete integrability.
• For general systems of differential equations, completion is a first step; it checks consistency and determines the size of the solution space. In its algebraic variants this leads to differential ideal theory, where one tries to lift as much as possible of commutative algebra to differential equations.
• Dynamical systems theory is often considered to be dominated by numerical computations, but as more and more complicated systems are studied, the need for symbolic manipulation is rapidly increasing.
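Closed-form ODE solving in a CAS, here SymPy (an illustrative choice):

```python
from sympy import Function, Eq, dsolve, symbols

x = symbols('x')
y = Function('y')
# The harmonic oscillator y'' + y = 0, solved symbolically:
print(dsolve(Eq(y(x).diff(x, 2) + y(x), 0), y(x)))
# Eq(y(x), C1*sin(x) + C2*cos(x))
```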
Algebraic Complexity Theory
• In algebraic complexity theory, algebraic problems (such as polynomial evaluation, interpolation, matrix multiplication, or factorization of polynomials) are studied within an algebraic model of computation (straight-line programs, computation trees, etc.). Such a model of computation specifies the possible inputs, the admissible operations which can be performed in each step, and the computational cost of a single step.
• The goal of algebraic complexity theory is to derive lower bounds for the complexity of algebraic problems, that is, lower bounds on the sum of the costs of all single steps of every admissible algorithm that solves a given problem.
• Ideally, the upper bound achieved by an algorithm coincides with a lower bound, which shows the optimality of that algorithm within the given model of computation.
• Deriving lower bounds is a difficult task, since one has to argue about every conceivable algorithm within the model that solves a given problem. Nevertheless, a deep theory has emerged.
Algebraic Complexity Theory
• For computing the coefficients of a univariate polynomial of degree n from its roots, as well as for computing the interpolating polynomial, a lower bound of order n log n for the number of required multiplications and divisions has been obtained.
• Matrix multiplication plays a key part in many computational problems of numerical linear algebra. Strassen showed that O(n^2.82) arithmetic operations suffice for multiplying two n × n matrices (see the sketch below). Later, faster matrix multiplication algorithms with exponents below 2.38 were devised. Today it is conjectured that any exponent greater than 2 can be achieved; on the other hand, only linear lower bounds are known for matrix multiplication.
• There are well-known fast algorithms performing the discrete Fourier transform of length n in O(n log n) arithmetic steps. These algorithms are of fundamental importance to computer algebra: among others, fast algorithms for multiplying integers (Schönhage-Strassen) and polynomials rely on them. Whether a Fourier transform of length n requires at least n log n operations is still an open question.
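A minimal sketch of Strassen's recursion for matrices of size 2^k: seven recursive multiplications instead of eight give the O(n^(log2 7)) ≈ O(n^2.81) bound:

```python
def strassen(A, B):
    """Multiply two 2^k x 2^k matrices (lists of lists) with 7 recursive products."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M):   # split into four h x h blocks
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    def add(X, Y): return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y): return [[a - b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = quad(A)
    B11, B12, B21, B22 = quad(B)
    M1 = strassen(add(A11, A22), add(B11, B22))   # Strassen's seven products
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)           # reassemble the result blocks
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    return [C11[i] + C12[i] for i in range(h)] + [C21[i] + C22[i] for i in range(h)]

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```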
Algebraic Complexity Theory
• The degree method can be applied to study programs with branchings (computation trees).
• A fast variant of Euclid's algorithm for univariate polynomials of degree n, due to Knuth and Schönhage, requires only O(n log^2 n) arithmetic operations, and only O(n log n) multiplications and divisions. The optimality of this algorithm with respect to the required multiplicative steps has been proved.
• Various problems in computational geometry can be formalized as semi-algebraic membership problems, which are solved by computation trees over the reals. A lower bound on the complexity of such problems can be derived in terms of the number of connected components of semi-algebraic sets.
• While NP-completeness was originally developed in the Turing machine model, analogous concepts have meanwhile been developed in algebraic frameworks of computation.
Computer Analysis
• Whereas the classical areas of computer algebra are dominated by algorithms which are based on algebraic concepts and aim at closed symbolic solutions, the main emphasis of computer analysis is on using CAS for computing approximate solutions in a controlled manner.
• Its goals are:
• to find a simple and transparent form of an expression, while maintaining an appropriate level of precision;
• to determine error bounds which are computed entirely by a program, in the same way as the approximations themselves, without requiring user interaction.
• Computer analysis algorithms combine the methods of numerical computing with symbolic procedures.
Computer Analysis
• Finding analytic approximations to the solutions of differential equations is one of the main tasks of computer analysis.
• Preference is given to mathematical approximation methods that allow an automatic adaptation to a given problem and give a transparent, short formula expression at a reasonable expense.
• Good results can be achieved by a two-step approach. In the first step, the given problem is replaced by a suitably chosen neighboring problem whose exact solution can be determined by means of computer algebra algorithms. The fundamental solutions so obtained form the basis for an adequate approximate solution in the second step.
• All algorithms developed so far are complemented by implementable error estimates.
Algorithms for Computing Validated Results
• Verification algorithms lie on the boundary between computer algebra algorithms and numerical algorithms. With the former they share the strict validity of every result; like the latter they are very efficient, usually requiring a computing time slower only by a factor of five to ten compared to the best known numerical algorithm.
• The basic principle of verification algorithms, or algorithms for computing validated results (also called algorithms with automatic result verification), is as follows. First, a floating-point algorithm is used to compute an approximate solution for a given problem. This approximation is hopefully of good quality; however, no quality assumption is used at all. Second, in the final verification step, either error bounds are computed for the previously calculated approximation, or an error message signals that error bounds could not be computed.
• Any computed result is always correct in the sense that all possible conversion errors, rounding errors, approximation errors and others are rigorously estimated.
Algorithms for Computing Validated Results
• The calculation of the actual error bounds must use some estimation of the errors of the individual operations. This can be done by standard error analysis or, most conveniently, by using interval operations (see the sketch below).
• There are a number of public domain and commercial libraries for interval arithmetic and algorithms. INTLAB, written entirely in Matlab, comprises all interval operations, including real, complex and sparse matrices, standard functions, automatic differentiation, etc.
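A minimal sketch of interval arithmetic; note that real libraries such as INTLAB additionally round outward at every step, which this toy class omits:

```python
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        # the product interval is spanned by the four endpoint products
        ps = [self.lo*other.lo, self.lo*other.hi,
              self.hi*other.lo, self.hi*other.hi]
        return Interval(min(ps), max(ps))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(0.9, 1.1)   # an approximation of 1 with guaranteed error bound 0.1
print(x * x + x)         # a rigorous enclosure of x^2 + x: [1.71, 2.31]
```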
Hybrid methods
• The main goal of hybrid symbolic-numeric computation is to extend the domain of efficiently solvable problems by combining the methods of numerical and symbolic computation. A method is hybrid symbolic-numeric (or seminumerical) provided that it solves a mathematical problem and involves both aspects of numerical computing and aspects of symbolic computing.
• Seminumerical algorithms can be found in various areas:
• conversion of polynomial problems to eigenvalue problems (see the sketch below);
• polynomial arithmetic (including GCD) with numerical coefficients;
• interval arithmetic;
• symbolic pre-computation of expressions for later efficient and/or stable numerical evaluation;
• code generation by computer algebra systems for the later solution of numerical problems (for example PDEs);
• automatic differentiation of formulas and programs;
• construction of special-purpose numerical methods for differential equations that automatically preserve invariants;
• exact computation by intermediate use of floating-point arithmetic, for the sake of speed.
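The first item above in a nutshell: polynomial root-finding converted to an eigenvalue problem via the companion matrix (a standard construction; numpy is used here for the numerical eigenvalue step):

```python
import numpy as np

def roots_via_companion(coeffs):
    """Roots of the monic polynomial x^n + c[n-1]*x^(n-1) + ... + c[0]."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)       # subdiagonal of ones
    C[:, -1] = [-c for c in coeffs]  # last column: negated coefficients
    return np.linalg.eigvals(C)      # eigenvalues of C are the roots

# x^2 - x - 2 = (x + 1)(x - 2): coefficient list [c0, c1] = [-2, -1]
print(sorted(roots_via_companion([-2, -1]).real))  # [-1.0, 2.0]
```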