5.4 Collection of definitions

Basic solution for LP
This is a solution \(\vec{x}\) whose nonzero entries correspond to linearly independent columns of \(A\), where the constraints are \(Ax=b\).
Feasible solution for LP
This is a solution \(\vec{x}\) that lies in the feasible region, i.e. the region that satisfies the constraints. A feasible solution does not have to be basic.
Basic and feasible solution for LP
This is a solution \(\vec{x}\) that is both feasible and basic. Once we reach one of these, the simplex algorithm jumps from one basic feasible solution to the next, reducing the objective function \(J(u)\) until the optimum is found.
Basic but not feasible solution
Yes, these exist: a basic solution of \(Ax=b\) can have negative entries, which violates the constraint \(x\geq 0\). For example, with \(A=\begin{pmatrix} 1 & 1 & 0\\ 0 & 1 & 1 \end{pmatrix}\) and \(b=\begin{pmatrix} 1\\ 2 \end{pmatrix}\), the basis formed from the first two columns gives \(x=\begin{pmatrix} -1 & 2 & 0 \end{pmatrix}^{T}\), which is basic but not feasible.
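A small enumeration sketch (the \(A\) and \(b\) below are made-up illustration data) that lists every basic solution of \(Ax=b\) and marks which are also feasible for \(x\geq 0\):

```python
import itertools
import numpy as np

# Hypothetical constraints Ax = b with x >= 0, chosen for illustration.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])

m, n = A.shape
solutions = {}
# A basic solution picks m linearly independent columns of A, solves for
# those entries, and sets the remaining entries to zero.
for cols in itertools.combinations(range(n), m):
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-12:
        continue  # columns are linearly dependent; no basic solution here
    x = np.zeros(n)
    x[list(cols)] = np.linalg.solve(B, b)
    solutions[cols] = (x, bool(np.all(x >= 0)))

for cols, (x, feasible) in solutions.items():
    print(cols, x, "feasible" if feasible else "not feasible")
```

Running this shows that the basis of columns 0 and 1 produces a basic solution with a negative entry, i.e. basic but not feasible, while the other two bases give basic feasible solutions.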
Newton-Raphson method
The iteration is \[ u^{k+1}= u^{k} - \frac{\nabla J(u^{k})}{ \nabla ^{2} J(u^{k}) } \] where \(\nabla ^{2} J(u^{k})\) is the Hessian. For the quadratic expression \[ J(u) = \frac{1}{2} u^{T} A u + b^{T} u + c \] the Hessian is the \(A\) matrix. Of course we cannot divide by a matrix; the division above stands for multiplication by the inverse of the Hessian. So the iteration is \[ u^{k+1} = u^{k} - \left [ \nabla ^{2} J(u^{k}) \right ] ^{-1} \nabla J(u^{k}) \] See handout Newton for an example with \(J(u)\) given and how to use this method to iterate to \(u^{\ast }\). If \(J(u)\) is quadratic, this converges in one step.
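A minimal numeric sketch of the one-step convergence claim (the \(A\), \(b\), and starting point below are made up; any positive definite \(A\) works):

```python
import numpy as np

# Newton iteration u_{k+1} = u_k - H^{-1} grad J(u_k) applied to a quadratic
# J(u) = 1/2 u^T A u + b^T u + c, whose Hessian is the constant matrix A.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # symmetric positive definite (illustration data)
b = np.array([-1.0, 1.0])

grad = lambda u: A @ u + b   # gradient of J
hess = lambda u: A           # Hessian of J (constant for a quadratic)

u = np.array([5.0, -3.0])    # arbitrary starting point
# One Newton step: solve the Hessian system instead of forming the inverse.
u = u - np.linalg.solve(hess(u), grad(u))

u_star = -np.linalg.solve(A, b)   # analytic optimum u* = -A^{-1} b
print(u, u_star)                  # the single step already lands on u*
```

Solving the linear system with `np.linalg.solve` is the standard way to apply \(\left[\nabla^{2}J\right]^{-1}\) without explicitly inverting the Hessian.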
Quadratic expression
An expression is quadratic if it can be written as

\[{\displaystyle \sum \limits _{i}}{\displaystyle \sum \limits _{j}} a_{ij}u_{i}u_{j}+{\displaystyle \sum \limits _{i}} b_{i}u_{i}+c \] For example, \(x_{1}^{2}+9x_{1}x_{2}+14x_{2}^{2}\) becomes \begin{align*} x_{1}^{2}+9x_{1}x_{2}+14x_{2}^{2} & =a_{11}x_{1}x_{1}+a_{21}x_{2}x_{1}+a_{12}x_{1}x_{2}+a_{22}x_{2}x_{2}+b_{1}x_{1}+b_{2}x_{2}+c\\ & =a_{11}x_{1}^{2}+a_{21}x_{2}x_{1}+a_{12}x_{1}x_{2}+a_{22}x_{2}^{2}+b_{1}x_{1}+b_{2}x_{2}+c \end{align*}

Comparing both sides, we see that by setting \(a_{11}=1,a_{12}=\frac{9}{2},a_{21}=\frac{9}{2},a_{22}=14\) and by setting \(b_{1}=0,b_{2}=0\) and \(c=0\) we can write it in that form. Hence it is quadratic and \[ A= \begin{pmatrix} 1 & \frac{9}{2}\\ \frac{9}{2} & 14 \end{pmatrix} ,\qquad b=\begin{pmatrix} 0\\ 0 \end{pmatrix} \] Therefore \begin{align*} x_{1}^{2}+9x_{1}x_{2}+14x_{2}^{2} & =x^{T}Ax+b^{T}x+c\\ & =\begin{pmatrix} x_{1} & x_{2}\end{pmatrix}\begin{pmatrix} 1 & \frac{9}{2}\\ \frac{9}{2} & 14 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} +\begin{pmatrix} 0 & 0 \end{pmatrix}\begin{pmatrix} x_{1}\\ x_{2}\end{pmatrix} +0 \end{align*}

Since we are able to write \(x_{1}^{2}+9x_{1}x_{2}+14x_{2}^{2}=x^{T}Ax+b^{T}x+c\), it is quadratic. Notice that the \(A\) matrix can always be chosen symmetric, by splitting each cross term evenly between \(a_{ij}\) and \(a_{ji}\).
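The identification above is easy to verify numerically; this sketch checks \(x^{T}Ax\) against the original polynomial at random points:

```python
import numpy as np

# Verify that x1^2 + 9 x1 x2 + 14 x2^2 equals x^T A x for the symmetric A
# derived above (here b = 0 and c = 0, so only the quadratic term matters).
A = np.array([[1.0, 4.5],
              [4.5, 14.0]])

rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.standard_normal(2)
    direct = x[0]**2 + 9*x[0]*x[1] + 14*x[1]**2   # polynomial, term by term
    quadratic_form = x @ A @ x                    # matrix form x^T A x
    assert np.isclose(direct, quadratic_form)
```

The off-diagonal entries \(\frac{9}{2}\) sum to the cross-term coefficient \(9\), which is exactly the even split that makes \(A\) symmetric.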

Superlinear convergence
A sequence \(\{ u^{k} \}\) in \(\Re ^{n}\) is said to converge superlinearly to \(u^{\ast }\) if the following holds: for any \(\theta \in (0,1]\), \[ \lim _{k \to \infty } \frac{\| u^{k} - u^{\ast }\|}{\theta ^{k}} = 0 \] An example is \(u^{k} = e^{-k^{2}}\). Since \(u^{\ast }=0\), we have \(\frac{ e^{-k^{2}} }{\theta ^{k}} \to 0\) no matter what \(\theta \in (0,1]\) is. Remember, the ratio has to go to zero for every \(\theta \).
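The example can be checked numerically; this sketch evaluates the ratio \(e^{-k^{2}}/\theta ^{k}\) for a few values of \(\theta \), including ones very close to \(0\) where the geometric denominator shrinks fastest:

```python
import numpy as np

# Numerically check that u^k = e^{-k^2} converges superlinearly to u* = 0:
# the ratio |u^k - u*| / theta^k must vanish for every theta in (0,1].
k = np.arange(1, 30)
for theta in (0.99, 0.5, 0.01):
    ratio = np.exp(-k.astype(float)**2) / theta**k
    print(theta, ratio[-1])   # the ratio tends to 0 in every case
```

Even for \(\theta =0.01\) the numerator \(e^{-k^{2}}\) eventually dominates, because \(e^{-k^{2}}\) shrinks faster than any geometric sequence \(\theta ^{k}\).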
Quadratic convergence theorem
Given the quadratic \[ J(u)= \frac{1}{2} u^{T} A u + b^{T} u + c \] and a set of \(N\) mutually conjugate vectors (with respect to \(A\)) \(\{v^{0},v^{1},\dots ,v^{N-1}\}\), the conjugate direction algorithm converges to the optimal \(u^{\ast }= -A^{-1} b\) in \(N\) steps or less. Proof in lecture 3/1/2016 (long).
A-conjugate vectors
These are vectors that are mutually conjugate with respect to \(A\). The directions \(\{v^{0},v^{1},\dots ,v^{N-1} \}\) are said to be mutually conjugate with respect to \(A\) if \[ (v^{i})^{T} A v^{j} = 0 \] for all \(i\neq j\).
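The two definitions above can be exercised together in a short sketch (the \(A\), \(b\), and starting point are made up): build \(A\)-conjugate directions by Gram-Schmidt on the standard basis, then run the conjugate direction algorithm with exact line searches and confirm it reaches \(u^{\ast }=-A^{-1}b\) in \(N\) steps.

```python
import numpy as np

# Conjugate direction algorithm on J(u) = 1/2 u^T A u + b^T u
# (c is irrelevant to the minimizer). Illustration data only.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])
N = len(b)

# Build mutually A-conjugate directions by Gram-Schmidt on the standard
# basis, enforcing (v^i)^T A v^j = 0 for i != j.
V = []
for i in range(N):
    v = np.eye(N)[i]
    for w in V:
        v = v - (v @ A @ w) / (w @ A @ w) * w
    V.append(v)

u = np.array([10.0, -7.0])              # arbitrary starting point
for v in V:                             # exactly N steps
    g = A @ u + b                       # gradient at the current iterate
    alpha = -(g @ v) / (v @ A @ v)      # exact line search along v
    u = u + alpha * v

u_star = -np.linalg.solve(A, b)         # u* = -A^{-1} b
print(u, u_star)                        # equal after N steps, as the theorem says
```

The step length \(\alpha \) comes from minimizing \(J(u+\alpha v)\) exactly along each direction; with conjugate directions these one-dimensional minimizations never undo each other, which is why \(N\) steps suffice.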