1.15 Lecture 13. Thursday October 9 2014. Default norms, convergence, Picard

We will finish the material on vector spaces today. Next we will solve the state space equation.

Default norms:

There are many norms on \(\Re ^{n}\); we will use \(\left \Vert{}\right \Vert \) to indicate the default Euclidean norm, versus \(\left \Vert{}\right \Vert _{\infty }\) for the maximum norm (the norm of a vector is its largest component in absolute value). The same idea will be used for other vector spaces. For matrix norms, we will talk about the induced norm. Say \(M\in \Re ^{m\times n}\) is a matrix (i.e. a matrix of dimensions \(m\times n\) with real entries). Then define \begin{equation} \left \Vert M\right \Vert =\max _{\left \Vert \vec{x}\right \Vert =1}\left \Vert M\vec{x}\right \Vert \tag{1} \end{equation}

What this means is that we apply \(M\) to every vector \(\vec{x}\) with norm \(\left \Vert \vec{x}\right \Vert _{2}=1\), look at the generated vector \(v=M\vec{x}\), and then apply the standard (Euclidean) vector norm to \(v\), as in \(\left \Vert v\right \Vert _{2}\). We pick the largest norm \(\left \Vert v\right \Vert _{2}\) that results, and call this value the norm of \(M\). This is the induced norm \(\left \Vert M\right \Vert \).
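As a numerical sanity check of definition (1), the sketch below (with an arbitrarily chosen \(2\times 2\) matrix, purely for illustration) sweeps unit vectors \(\vec{x}=(\cos t,\sin t)\) around the unit circle and records the largest \(\left \Vert M\vec{x}\right \Vert _{2}\) found:

```python
import math

# A hypothetical 2x2 matrix, chosen only for illustration.
M = [[3.0, 1.0],
     [0.0, 2.0]]

def mat_vec(M, x):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

def euclid(v):
    """Standard Euclidean vector norm."""
    return math.sqrt(sum(c * c for c in v))

# Every x = (cos t, sin t) satisfies ||x||_2 = 1; the largest resulting
# ||Mx||_2 over a fine sweep approximates the induced norm in (1).
norm_estimate = max(
    euclid(mat_vec(M, [math.cos(t), math.sin(t)]))
    for t in (2 * math.pi * k / 10000 for k in range(10000))
)
print(norm_estimate)
```

For this particular \(M\), the estimate agrees with the closed-form value \(\sqrt{\lambda _{\max }(M^{T}M)}=\sqrt{7+\sqrt{13}}\) discussed below.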

Reader: Does the above definition define a norm? Recall from last lecture that a norm \(\left \Vert{}\right \Vert \) must satisfy three properties. For the zero matrix the property is easy to show, and scaling is also easy. Now show the triangle inequality, which says \(\left \Vert M_{1}+M_{2}\right \Vert \leq \left \Vert M_{1}\right \Vert +\left \Vert M_{2}\right \Vert \).

Reader: Show that (1) is equivalent to \(\left \Vert M\right \Vert =\max _{\vec{x}\neq 0}\frac{\left \Vert M\vec{x}\right \Vert }{\left \Vert \vec{x}\right \Vert }\)

Reader: Show that (1) is equivalent to \(\left \Vert M\right \Vert =\sqrt{\lambda _{\max }\left ( M^{T}M\right ) }\) where \(\lambda _{\max }\left ( M^{T}M\right ) \) means the largest eigenvalue of \(M^{T}M\). A sketch of the proof was given, but it needs more time to understand.
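For a \(2\times 2\) example the eigenvalue formula can be checked by hand, since the eigenvalues of the symmetric matrix \(S=M^{T}M\) come from the quadratic \(\lambda ^{2}-\operatorname{tr}(S)\lambda +\det (S)=0\). A small sketch (same illustrative matrix as before; the entries are arbitrary):

```python
import math

# Illustrative 2x2 matrix (arbitrary entries).
M = [[3.0, 1.0],
     [0.0, 2.0]]

# Form S = M^T M explicitly; S is 2x2, symmetric, positive semidefinite.
S = [[M[0][0]**2 + M[1][0]**2,           M[0][0]*M[0][1] + M[1][0]*M[1][1]],
     [M[0][0]*M[0][1] + M[1][0]*M[1][1], M[0][1]**2 + M[1][1]**2]]

# Largest root of the characteristic polynomial
# lam^2 - tr(S) lam + det(S) = 0, via the quadratic formula.
tr = S[0][0] + S[1][1]
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
lam_max = (tr + math.sqrt(tr**2 - 4 * det)) / 2

induced_norm = math.sqrt(lam_max)
print(induced_norm)  # sqrt of the largest eigenvalue of M^T M
```

Here \(\operatorname{tr}(S)=14\) and \(\det (S)=36\), giving \(\lambda _{\max }=7+\sqrt{13}\) and \(\left \Vert M\right \Vert =\sqrt{7+\sqrt{13}}\approx 3.2566\), matching the sampled estimate of (1).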

Reader: Find the matrix norm induced by \(\left \Vert x\right \Vert _{\infty }\)
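Without giving the derivation away, one can check numerically that the induced \(\infty \)-norm turns out to be the largest absolute row sum of \(M\). Since \(\left \Vert M\vec{x}\right \Vert _{\infty }\) is convex in \(\vec{x}\), its maximum over the unit \(\infty \)-ball is attained at a vertex, i.e. a sign vector \(\vec{x}\in \{-1,+1\}^{n}\), so a brute-force check over sign vectors suffices (the matrix below is an arbitrary example):

```python
import itertools

# Arbitrary example matrix for illustration.
M = [[1.0, -2.0, 3.0],
     [4.0,  0.5, -1.0]]

def mat_vec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

n = len(M[0])

# Maximize ||Mx||_inf over the vertices of the unit inf-norm ball,
# i.e. over all sign vectors x in {-1, +1}^n.
best = max(
    max(abs(c) for c in mat_vec(M, x))
    for x in itertools.product([-1.0, 1.0], repeat=n)
)

# Compare with the largest absolute row sum of M.
row_sum = max(sum(abs(m) for m in row) for row in M)
print(best, row_sum)  # the two agree
```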

We will use the space of bounded functions; the subset of this space we will use most is the set of bounded continuous functions over some interval. Note: any function continuous on the closed interval \(\left [ t_{0},t_{1}\right ] \) is bounded there. We will call this space \(B\left ( \left [ t_{0},t_{1}\right ] ,\Re ^{n}\right ) \).

A function \(f\left ( t\right ) \) is bounded if \(\left \Vert f\left ( t\right ) \right \Vert \leq \beta \) for some \(\beta <\infty \). The continuous functions satisfy \(C\left ( \left [ t_{0},t_{1}\right ] ,\Re ^{n}\right ) \subset B\). We need a norm for \(C\left ( \left [ t_{0},t_{1}\right ] ,\Re ^{n}\right ) \). We will use \(\left \Vert f\right \Vert \) as the norm, which is the largest value of the norm of the function over \(\left [ t_{0},t_{1}\right ] \). Make sure not to confuse \(\left \Vert f\right \Vert \) and \(\left \Vert f\left ( t\right ) \right \Vert \). The first one is the norm on the function space, i.e.\begin{equation} \left \Vert f\right \Vert =\max _{t_{0}\leq t\leq t_{1}}\left \Vert f\left ( t\right ) \right \Vert \tag{2} \end{equation} While \(\left \Vert f\left ( t\right ) \right \Vert \) is just the normal Euclidean norm, defined only for a specific \(t\); i.e. we fix \(t=t_{0}\) and calculate \(\left \Vert f\left ( t_{0}\right ) \right \Vert \). But \(\left \Vert f\right \Vert \) has no \(t\) in it: it is the norm over the whole range, defined as in (2) above.

Reader: Show that (2) defines a norm. (Side note: there was a remark here about non-negative functions; check what it refers to.)
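The distinction between \(\left \Vert f\left ( t\right ) \right \Vert \) at one fixed \(t\) and \(\left \Vert f\right \Vert \) over the whole interval can be sketched numerically. The function below is a hypothetical example, \(f(t)=(\cos t,\sin 2t)\) on \([0,3]\), chosen only to illustrate (2):

```python
import math

# Hypothetical vector-valued function f(t) = (cos t, sin 2t) on [0, 3].
def f(t):
    return [math.cos(t), math.sin(2 * t)]

def euclid(v):
    return math.sqrt(sum(c * c for c in v))

t0, t1, N = 0.0, 3.0, 100000

# ||f(t0)|| is an ordinary Euclidean norm at one fixed time ...
val_at_t0 = euclid(f(t0))  # f(0) = (1, 0), so this is 1.0

# ... while ||f|| in (2) maximizes that quantity over the whole interval
# (approximated here by sampling the interval densely).
f_norm = max(euclid(f(t0 + (t1 - t0) * k / N)) for k in range(N + 1))
print(val_at_t0, f_norm)
```

For this example one can verify by calculus that the maximum of \(\sqrt{\cos ^{2}t+\sin ^{2}2t}\) is \(5/4\), attained where \(\cos ^{2}t=5/8\), so \(\left \Vert f\right \Vert =1.25>\left \Vert f\left ( 0\right ) \right \Vert =1\).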

Now we will talk about norms on \(B\) where \(f\left ( t\right ) \) is not necessarily continuous function.


When \(f\left ( t\right ) \) is not continuous, we will use \(\sup \) instead of \(\max \) in the definition, i.e. we write (2) as\begin{equation} \left \Vert f\right \Vert =\sup _{t_{0}\leq t\leq t_{1}}\left \Vert f\left ( t\right ) \right \Vert \tag{2A} \end{equation} We need one more thing before going on to solve the state equation, which is

Reader: Show that \(\left \Vert{\displaystyle \int \limits _{0}^{t}} f\left ( \tau \right ) d\tau \right \Vert \leq{\displaystyle \int \limits _{0}^{t}} \left \Vert f\left ( \tau \right ) \right \Vert d\tau \). (Question: ask which norm \(\left \Vert{}\right \Vert \) is meant here. Use a Riemann sum to prove this?)

Reader: Similarly, using matrix norms, show that \(\left \Vert{\displaystyle \int \limits _{0}^{t}} A\left ( \tau \right ) d\tau \right \Vert \leq{\displaystyle \int \limits _{0}^{t}} \left \Vert A\left ( \tau \right ) \right \Vert d\tau \) where \(A\) is now a matrix in \(\Re ^{m\times n}\)
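The Riemann-sum idea behind these inequalities can be sketched numerically: the norm of a sum is at most the sum of the norms (triangle inequality), and this survives the limit. The sketch below uses a hypothetical integrand \(f(\tau )=(\cos \tau ,\sin \tau )\) on \([0,2]\), for which \(\left \Vert f\left ( \tau \right ) \right \Vert =1\) everywhere:

```python
import math

# Hypothetical integrand f(tau) = (cos tau, sin tau) on [0, t], t = 2.
def f(tau):
    return [math.cos(tau), math.sin(tau)]

def euclid(v):
    return math.sqrt(sum(c * c for c in v))

t, N = 2.0, 20000
d = t / N
taus = [d * (k + 0.5) for k in range(N)]  # midpoint Riemann sum

# Left side: norm of the componentwise integral of f.
integral = [d * sum(f(tau)[i] for tau in taus) for i in range(2)]
lhs = euclid(integral)

# Right side: integral of the norm of f (here ||f(tau)|| = 1 for all tau).
rhs = d * sum(euclid(f(tau)) for tau in taus)

print(lhs, rhs)  # lhs <= rhs, consistent with the Reader inequality
```

Here \(\int _{0}^{2}f=(\sin 2,\,1-\cos 2)\), so the left side is \(\sqrt{\sin ^{2}2+(1-\cos 2)^{2}}\approx 1.683\) while the right side is \(2\).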

Convergence:

A sequence \(\left \{ x_{k}\right \} _{k=1}^{\infty }\) in a normed vector space \(X\) is said to converge to \(x^{\ast }\in X\) if \[ \lim _{k\rightarrow \infty }\left \Vert x^{\ast }-x_{k}\right \Vert =0 \] Example: \(\lim _{k\rightarrow \infty }\begin{pmatrix} 1+\frac{1}{k}\\ \left ( -1\right ) ^{k}e^{-k}\end{pmatrix} =\begin{pmatrix} 1\\ 0 \end{pmatrix} \), or we can just write \(\begin{pmatrix} 1+\frac{1}{k}\\ \left ( -1\right ) ^{k}e^{-k}\end{pmatrix} \rightarrow \begin{pmatrix} 1\\ 0 \end{pmatrix} \)

But the vector \(\begin{pmatrix} \left ( -1\right ) ^{k}\\ \frac{1}{k}\end{pmatrix} \) does not converge.
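A quick numerical sketch of both examples (first a few values of \(\left \Vert x^{\ast }-x_{k}\right \Vert \) for the convergent sequence, then the oscillating one):

```python
import math

# The convergent example x_k = (1 + 1/k, (-1)^k e^{-k}) from above.
def x(k):
    return [1 + 1 / k, (-1) ** k * math.exp(-k)]

x_star = [1.0, 0.0]

def dist(u, v):
    """Euclidean distance ||u - v||."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# ||x* - x_k|| shrinks toward 0:
dists = [dist(x(k), x_star) for k in (1, 10, 100)]
print(dists)

# The second example keeps its first component jumping between +1 and -1,
# so ||x* - x_k|| cannot tend to 0 for any candidate limit x*:
y = lambda k: [(-1) ** k, 1 / k]
print(y(100), y(101))
```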

Reader: What about the convergence of this sequence of functions \(f_{k}\), defined over \(0\leq t\leq T\)?

(figure: the functions \(f_{k}\left ( t\right ) \), equal to \(1-kt\) for \(0\leq t\leq \frac{1}{k}\) and \(0\) for \(\frac{1}{k}<t\leq T\))

For pointwise convergence: at \(t=0\), \(f_{k}\left ( 0\right ) =1\) for every \(k\), so the limit there is \(1\). For any fixed \(0<t\leq T\), once \(k>\frac{1}{t}\) we have \(f_{k}\left ( t\right ) =0\), so the limit is \(0\). Hence \(f_{k}\) converges pointwise to the function \(f\) with \(f\left ( 0\right ) =1\) and \(f\left ( t\right ) =0\) for \(t>0\). But \(\left \Vert f_{k}-f\right \Vert =\sup _{0\leq t\leq T}\left \vert f_{k}\left ( t\right ) -f\left ( t\right ) \right \vert =1\) for every \(k\) (values of \(1-kt\) arbitrarily close to \(1\) occur just to the right of \(t=0\)), so it does not converge uniformly. For uniform convergence, we need \(\left \Vert f_{k}-f\right \Vert \rightarrow 0\) as \(k\rightarrow \infty \). In the space of bounded functions, we always mean uniform convergence.
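The gap between the two modes of convergence shows up clearly in a numerical sketch (assuming, as in the figure, \(f_{k}(t)=1-kt\) on \([0,\frac{1}{k}]\) and \(0\) afterwards, with \(T=1\) chosen for illustration):

```python
# Tent-like example f_k(t) = 1 - k t on [0, 1/k], 0 on (1/k, T].
def f_k(k, t):
    return max(0.0, 1.0 - k * t)

T = 1.0
t_fixed = 0.2

# Pointwise: at the fixed point t = 0.2, f_k(t) drops to 0 once k > 1/t.
pointwise = [f_k(k, t_fixed) for k in (1, 2, 5, 10, 100)]
print(pointwise)

# Uniform: the sup-norm distance to the pointwise limit (0 for t > 0)
# stays near 1 for every k, because f_k is close to 1 just right of t = 0.
def sup_dist(k, N=100000):
    # sample t > 0 only; the pointwise limit is 0 there
    return max(abs(f_k(k, T * j / N)) for j in range(1, N + 1))

sup_list = [sup_dist(k) for k in (1, 10, 100)]
print(sup_list)
```

The first list tends to \(0\) entry by entry, while the second never leaves the neighbourhood of \(1\): exactly the pointwise-but-not-uniform behaviour described above.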

Summary: \(f_{k}\rightarrow f\) in \(B\) means uniform convergence, while \(f_{k}\left ( t\right ) \rightarrow f\left ( t\right ) \) for each fixed \(t\) means pointwise convergence.

Reader: Show that uniform convergence implies pointwise convergence. Proof: for any fixed \(t\in I\),\begin{align*} \left \Vert f_{k}-f\right \Vert _{I} & =\max _{s\in I}\left \Vert f_{k}\left ( s\right ) -f\left ( s\right ) \right \Vert \\ & \geq \left \Vert f_{k}\left ( t\right ) -f\left ( t\right ) \right \Vert \end{align*}

But if \(f_{k}\) converges uniformly to \(f\) then \(\left \Vert f_{k}-f\right \Vert _{I}\rightarrow 0\) as \(k\rightarrow \infty \), hence

\begin{align*} \lim _{k\rightarrow \infty }\left \Vert f_{k}\left ( t\right ) -f\left ( t\right ) \right \Vert & =0\\ \lim _{k\rightarrow \infty }f_{k}\left ( t\right ) & =f\left ( t\right ) \end{align*}

Therefore, \(f_{k}\left ( t\right ) \) converges to \(f\left ( t\right ) \) pointwise. QED.

HW4 assigned.