This article explains the concepts of convexity and linear algebra basics, including convex sets, convex functions, linear independence, rank of a matrix, and solution sets of linear equations.
Background: Convexity • Def: The line segment joining two points $x, y \in \mathbb{R}^n$ is the collection of points $\{\lambda x + (1-\lambda)y : 0 \le \lambda \le 1\}$.
Def: A set $C \subseteq \mathbb{R}^n$ is called a convex set iff $\lambda x + (1-\lambda)y \in C$ whenever $x, y \in C$ and $0 \le \lambda \le 1$. (Figure: examples of nonconvex and convex sets.)
Def: The convex hull of a set $S$ is the set of all points that are convex combinations of points in $S$, i.e. $\mathrm{conv}(S) = \{x : x = \sum_{i=1}^{k} \lambda_i x_i,\ k \ge 1,\ x_1, \dots, x_k \in S,\ \lambda_1, \dots, \lambda_k \ge 0,\ \sum_{i=1}^{k} \lambda_i = 1\}$ • Picture: $\lambda_1 x + \lambda_2 y + \lambda_3 z$, $\lambda_i \ge 0$, $\sum_{i=1}^{3} \lambda_i = 1$, and $\lambda_1 x + \lambda_2 y + \lambda_3 z = (\lambda_1 + \lambda_2)\{\lambda_1/(\lambda_1+\lambda_2)\, x + \lambda_2/(\lambda_1+\lambda_2)\, y\} + \lambda_3 z$ (assuming $\lambda_1 + \lambda_2 \ne 0$). (Figure: triangle with vertices $x$, $y$, $z$.)
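The grouping identity in the picture above can be checked numerically. The points and weights below are hypothetical, chosen only for illustration; the code verifies that a convex combination of three points equals a convex combination of the first two points, re-weighted, combined with the third.

```python
import numpy as np

# Hypothetical points in R^2 (not from the slides; chosen for illustration).
x = np.array([0.0, 0.0])
y = np.array([2.0, 0.0])
z = np.array([0.0, 2.0])

# Convex combination weights: lam_i >= 0 and they sum to 1.
lam = np.array([0.5, 0.3, 0.2])

p = lam[0] * x + lam[1] * y + lam[2] * z

# The slide's identity: group the first two terms, so that
# lam1*x + lam2*y + lam3*z = (lam1+lam2) * {convex combination of x, y} + lam3*z.
s = lam[0] + lam[1]  # assumed nonzero here, as in the slide
q = s * (lam[0] / s * x + lam[1] / s * y) + lam[2] * z

print(np.allclose(p, q))  # the two expressions agree
```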
Proposition: Let $C \subseteq \mathbb{R}^n$ be a convex set and for $k \in \mathbb{R}$, define $kC = \{kx : x \in C\}$. Then $kC$ is a convex set. Pf) If $k = 0$, $kC = \{0\}$ is convex. Suppose $k \ne 0$. For any $kx, ky \in kC$ and $0 \le \lambda \le 1$, $\lambda(kx) + (1-\lambda)(ky) = k(\lambda x + (1-\lambda)y) \in kC$ since $\lambda x + (1-\lambda)y \in C$. Hence the property of convexity of a set is preserved under scalar multiplication. Consider other operations that preserve convexity (e.g., intersection of convex sets).
Convex function • Def: A function $f: \mathbb{R}^n \to \mathbb{R}$ is called a convex function if for all $x_1$ and $x_2$ and $0 \le \lambda \le 1$, $f$ satisfies $f(\lambda x_1 + (1-\lambda)x_2) \le \lambda f(x_1) + (1-\lambda)f(x_2)$. Also called a strictly convex function if the inequality is strict for all $x_1 \ne x_2$ and $0 < \lambda < 1$.
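The defining inequality can be spot-checked numerically for a function that is known to be convex. A minimal sketch, using $f(x) = x^2$ on $\mathbb{R}$ as the (assumed) example:

```python
import numpy as np

def f(x):
    # A standard convex function on R: f(x) = x^2.
    return x ** 2

# Numerically check the defining inequality
#   f(lam*x1 + (1-lam)*x2) <= lam*f(x1) + (1-lam)*f(x2)
# at many random points x1, x2 and weights lam in [0, 1].
rng = np.random.default_rng(0)
ok = True
for _ in range(1000):
    x1, x2 = rng.uniform(-10, 10, size=2)
    lam = rng.uniform(0, 1)
    lhs = f(lam * x1 + (1 - lam) * x2)
    rhs = lam * f(x1) + (1 - lam) * f(x2)
    ok = ok and (lhs <= rhs + 1e-9)  # small tolerance for rounding
print(ok)
```

A sampling check like this cannot prove convexity, but a single violating triple $(x_1, x_2, \lambda)$ would disprove it.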
Meaning: The line segment joining $(x_1, f(x_1))$ and $(x_2, f(x_2))$ lies on or above the graph of $f$.
Def: $f: \mathbb{R}^n \to \mathbb{R}$. Define the epigraph of $f$ as $\mathrm{epi}(f) = \{(x, \mu) \in \mathbb{R}^{n+1} : \mu \ge f(x)\}$ • Equivalent definition: $f: \mathbb{R}^n \to \mathbb{R}$ is a convex function if and only if $\mathrm{epi}(f)$ is a convex set. • Def: $f$ is a concave function if $-f$ is a convex function. • Def: $x \in C$ is an extreme point of a convex set $C$ if $x$ cannot be expressed as $\lambda y + (1-\lambda)z$, $0 < \lambda < 1$, for distinct $y, z \in C$ ($x \ne y, z$) (equivalently, $x$ does not lie on any line segment that joins two other points in the set $C$). (Figure: extreme points of a convex set.)
Review: Linear Algebra • Inner product of two column vectors $x, y \in \mathbb{R}^n$: $x'y = \sum_{i=1}^{n} x_i y_i$ • If $x'y = 0$, $x, y \ne 0$, then $x, y$ are said to be orthogonal. In 3-D, the angle between the two vectors is 90 degrees. • (Vectors are column vectors unless specified otherwise, but our text does not differentiate.)
Def: A set of vectors $\{a_1, \dots, a_k\}$ is said to be linearly dependent if $\exists\, \lambda_1, \dots, \lambda_k$, not all equal to 0, such that $\sum_{i=1}^{k} \lambda_i a_i = 0$ (i.e., there exists a vector in the set which can be expressed as a linear combination of the other vectors). • Def: linearly independent if not linearly dependent. In other words, $\sum_{i=1}^{k} \lambda_i a_i = 0$ implies $\lambda_i = 0$ for all $i$ (i.e., none of the vectors in the set can be expressed as a linear combination of the remaining vectors). • Def: Rank of a set of vectors: maximum number of linearly independent vectors in the set. • Def: Basis for a set of vectors: a collection of linearly independent vectors from the set such that every vector in the set can be expressed as a linear combination of them (a maximal linearly independent subset, a minimal generating subset).
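A small numerical illustration of rank as the maximum number of linearly independent vectors. The vectors are hypothetical; the third is constructed as the sum of the first two, so the set is linearly dependent:

```python
import numpy as np

# Three vectors in R^3; v3 = v1 + v2, so {v1, v2, v3} is linearly dependent
# (take lambda = (1, 1, -1) in the definition) and its rank is 2.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = v1 + v2

A = np.column_stack([v1, v2, v3])
r = np.linalg.matrix_rank(A)
print(r)  # 2: the maximum number of linearly independent vectors in the set
```

Here $\{v_1, v_2\}$ is a basis for the set: it is linearly independent and generates $v_3$.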
Thm) $r$ linearly independent vectors from a set form a basis for it if and only if the set has rank $r$. • Def: row rank of a matrix: rank of its set of row vectors; column rank of a matrix: rank of its set of column vectors. • Thm) For a matrix $A$, row rank = column rank. • Def: nonsingular matrix: rank = number of rows = number of columns (hence square); otherwise, called singular. • Thm) If $A$ is nonsingular, then a unique inverse $A^{-1}$ exists.
Simultaneous Linear Equations Thm: $Ax = b$ has at least one solution iff $\mathrm{rank}(A) = \mathrm{rank}([A, b])$. Pf) ($\Rightarrow$, by contraposition) Always $\mathrm{rank}([A, b]) \ge \mathrm{rank}(A)$. Suppose $\mathrm{rank}([A, b]) > \mathrm{rank}(A)$. Then $b$ is linearly independent of the column vectors of $A$, i.e., $b$ cannot be expressed as a linear combination of the columns of $A$. Hence $Ax = b$ does not have a solution. ($\Leftarrow$) There exists a basis among the columns of $A$ which generates $b$. So $Ax = b$ has a solution. Thm: Suppose $A$ is $m \times n$ and $\mathrm{rank}(A) = \mathrm{rank}([A, b]) = r$. Then $Ax = b$ has a unique solution if $r = n$. Pf) Let $y, z$ be any two solutions of $Ax = b$. Then $Ay = Az = b$, or $Ay - Az = A(y - z) = 0$. $A(y - z) = \sum_{j=1}^{n} A_j (y_j - z_j) = 0$. Since the $n$ column vectors of $A$ are linearly independent ($r = n$), we have $y_j - z_j = 0$ for all $j$. Hence $y = z$. (Note that $m$ may be greater than $n$.)
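Both theorems can be illustrated numerically. The $3 \times 2$ matrix and right-hand sides below are hypothetical examples: one $b$ is built inside the column space (consistent system) and one lies outside it (inconsistent system):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 1.0]])           # 3x2 with rank 2 (note m > n is allowed)
x_true = np.array([1.0, 1.0])
b_good = A @ x_true                   # in the column space: consistent
b_bad = np.array([1.0, 0.0, 0.0])    # not in the column space: inconsistent

def solvable(A, b):
    # Ax = b has a solution iff rank(A) == rank([A, b]).
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.column_stack([A, b]))

print(solvable(A, b_good), solvable(A, b_bad))  # True False

# Since rank(A) = n = 2, the consistent system's solution is unique;
# least squares recovers it exactly here.
x = np.linalg.lstsq(A, b_good, rcond=None)[0]
print(np.allclose(x, x_true))
```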
Operations that do not change the solution set of the linear equations (elementary row operations) • Interchange the positions of two equations • Multiply both sides of an equation by a nonzero scalar $k$ • Multiply an equation by a scalar $k$ and add it to another equation. Each operation is reversible, so if $X$ is the solution set before an operation and $Y$ the solution set after it, then $X \subseteq Y$ and $Y \subseteq X$; hence $X = Y$, i.e., the solution sets are the same. • For $Ax = b$, the operations can be performed directly on the augmented matrix $[A, b]$.
Solving systems of linear equations (Gauss-Jordan elimination; substitution of variables) (will be used in the simplex method to solve LP problems)
Elementary row operations are equivalent to premultiplying both sides of the equation $Ax = b$ by a nonsingular square matrix.
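This equivalence is easy to demonstrate: applying a row operation to the identity matrix gives the corresponding elementary matrix $E$, and premultiplying by $E$ performs that operation. The matrix below is a hypothetical example:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [4.0, 5.0]])           # an illustrative 2x2 matrix

# Elementary row operation: add (-2) * row 1 to row 2.
# The same operation expressed as a matrix: apply it to the identity.
E = np.eye(2)
E[1, 0] = -2.0

A_rowop = A.copy()
A_rowop[1, :] += -2.0 * A_rowop[0, :]

print(np.allclose(E @ A, A_rowop))  # premultiplying by E performs the row operation
```

Each elementary matrix is nonsingular (its operation can be undone), which is why the solution set is preserved.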
So if we multiply together all the elementary row operation matrices, we obtain a single matrix that records all of the elementary row operations we performed.
Finding the inverse of a nonsingular matrix $A$: perform elementary row operations (premultiply by elementary row operation matrices) to transform $[A : I]$ into $[I : B]$. Let the product of the elementary row operation matrices be $C$. Then $C[A : I] = [CA : C] = [I : B]$. Hence $CA = I$, so $C = A^{-1}$ and $B = C = A^{-1}$.
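The $[A : I] \to [I : B]$ procedure above can be sketched in code. This is a minimal Gauss-Jordan implementation with partial pivoting added for numerical stability (the pivoting step is an addition, not part of the slide), not a production routine; the test matrix is a hypothetical example:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert a nonsingular matrix by reducing [A : I] to [I : B], B = A^{-1}."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])   # augmented matrix [A : I]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest pivot candidate.
        pivot = col + np.argmax(np.abs(M[col:, col]))
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]                     # scale pivot row so pivot = 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]    # eliminate the column elsewhere
    return M[:, n:]                               # right half is now A^{-1}

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
B = gauss_jordan_inverse(A)
print(np.allclose(A @ B, np.eye(2)))  # B is the inverse of A
```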