4.1 HW 1

  4.1.1 Problem 1
  4.1.2 Problem 2
  4.1.3 Problem 3
  4.1.4 Problem 4
  4.1.5 Problem 5
  4.1.6 HW 1 key solution

4.1.1 Problem 1

problem description

[figure: problem statement]

solution

A multilinear function \(f\left ( x_{1},\cdots ,x_{n}\right ) \) is one which is linear with respect to each of its independent variables taken one at a time: fixing all the independent variables except one reduces it to a linear function of the single variable left free to change. For example, for \(n=2\), \[ \boxed{ f\left ( x,y\right ) =ax+by+cxy } \] is a multilinear function in \(x,y\). Fixing \(x\) at some specific value \(x_{0}\in \Re \), the above becomes \begin{align*} \left . f\left ( x,y\right ) \right \vert _{x=x_{0}} & =ax_{0}+by+cx_{0}y\\ & =y\left ( b+cx_{0}\right ) +ax_{0}\\ & =Ay+B \end{align*}

Where the constants \(a,b,c,x_{0}\) have been combined into \(A\) and \(B\). Similarly, fixing \(y=y_{0}\) gives\[ \left . f\left ( x,y\right ) \right \vert _{y=y_{0}}=Cx+D \] A linear function attains its extreme values at the start or the end of its allowed range (the function is either increasing, decreasing, or constant). This shows that \(f\left ( x_{1},\cdots ,x_{n}\right ) \) attains its extreme values at an endpoint of the range of each of its variables \(x_{1},\cdots ,x_{n}\).

To illustrate what was said so far, take \(n=2\): fixing \(x=x_{0}\) gives the function \(\left . f\left ( x,y\right ) \right \vert _{x=x_{0}}\), and fixing \(y=y_{0}\) gives \(\left . f\left ( x,y\right ) \right \vert _{y=y_{0}}\).

[figure: \(f\) restricted to \(x=x_{0}\) and to \(y=y_{0}\)]

Showing that the extreme points must be at a “corner”, or vertex, is now straightforward. From the above, an extreme value of the multilinear function must lie on an edge. But on any edge only one of the coordinates is free to change while all the others are fixed. Therefore on any edge of the hypercube (in higher dimensions) the multilinear function is linear in the one free parameter, so along that edge the function is either increasing, decreasing, or constant. Its extreme values on the edge therefore occur where the edge starts or ends, which is a vertex. This is illustrated by the following diagram for the \(\Re ^{2}\) case, and by the numerical sketch after it.

[figure: extreme values along edges, \(\Re ^{2}\) case]
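To make this concrete, here is a small numerical check (a sketch with hypothetical coefficients, not part of the original solution): the maximum of a multilinear function over a fine grid on \(\left [ 0,1\right ] ^{2}\) is compared against its values at the four corners, and the two agree.

% hypothetical multilinear function: its max over a fine grid on [0,1]^2
% should agree with its max over the 4 corners
f = @(x,y) 1 + 2*x - 3*y + 5*x.*y;
[X,Y] = meshgrid(linspace(0,1,201));
Z = f(X,Y);
gridMax   = max(Z(:));
cornerMax = max([f(0,0) f(0,1) f(1,0) f(1,1)]);
fprintf('grid max = %g, corner max = %g\n', gridMax, cornerMax)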

The following illustrates the case for \(\Re ^{3}\) by showing a few edges (the function value \(f\left ( x,y,z\right ) \) itself is hard to display here, since that would require a fourth dimension).

[figure: a few edges of the cube, \(\Re ^{3}\) case]

This argument carries over to hypercubes in higher dimensions.

4.1.2 Problem 2

   4.1.2.1 Appendix

problem description

[figure: problem statement]

solution

The function \(J\left ( u\right ) \) is defined on a cube. Since it is multilinear in \(u_{1},u_{2},u_{3}\), the minimizer must be at one of the 8 vertices of the cube, as shown in problem one. The optimization method the problem asks us to perform does not converge to a minimizer \(u^{\ast }\) in general, and it does not in this problem. The values of \(J\) at the corners are as shown below

[figure: values of \(J\) at the cube vertices]

In the first stage, we select between \(J_{1}\) and \(J_{2}\), (this is the result of fixing \(u_{2}=u_{3}=1\) and optimizing \(J\left ( u\right ) \) to find \(\hat{u}_{1}\)). We find that \(J_{2}\) wins, since it is smaller. Then we select between \(J_{2}\) and \(J_{7}\) and find that \(J_{2}\) still wins. Then we select between \(J_{2}\) and \(J_{4}\), and find that \(J_{2}\) also wins. So at the end of the first phase we mark \(J_{2}\) as the winner (the minimum so far).

Now we go back to \(J_{1}\) but select the second edge leaving this node, and select between \(J_{1}\) and \(J_{8}\), and find that \(J_{8}\) wins. Now we select between \(J_{8}\) and \(J_{7},\) and find that \(J_{8}\) wins. Then select between \(J_{8}\) and \(J_{5}\), and \(J_{8}\) still wins. This ends the second phase.

Between \(J_{8}=-3\) and \(J_{2}=-3\) (winner of phase one), there is a tie so far.

We go back to \(J_{1}\) (this is the final stage) and now take the third edge leaving this node, selecting between \(J_{1}\) and \(J_{3}\); \(J_{3}\) wins. Then we select between \(J_{3}\) and \(J_{4}\); \(J_{3}\) wins. Then we select between \(J_{3}\) and \(J_{5}\); \(J_{3}=-3\) is still the winner.

From vertex \(J_{1}\) we have followed this algorithm over all three edges leaving it, and found that the overall minimum is \(J\left ( u\right ) =-3\). If we continue this process, it simply repeats.

But \(J\left ( u\right ) =-3\) is not the correct value for the global minimum, since the global minimum is at vertex \(J_{6}^{\ast }=-27\), which we never had the chance to compare against due to the way this algorithm works. If, for example, \(J_{7}\) had been smaller than \(J_{2}\), say \(-4\), then we would have had the chance to select \(J_{7}\) and then compare \(J_{6}\) with \(J_{7}\), finding that it is the overall minimum. So this algorithm is not guaranteed to converge to the global minimum in general.

Here is the decision tree used for the above process.

[figure: decision tree for the above process]

We see that the global minimum at vertex \(J_{6}=-27\) was never visited; a wrong turn was taken at each stage.

4.1.2.1 Appendix

This appendix can be skipped. It shows the basic calculations for the first phase, for illustration, and the Matlab code used. We start at \(\left ( 1,1,1\right ) \) and fix \(u_{2}=u_{3}=1\).

[figure: first edge, \(u_{2}=u_{3}=1\)]

Hence on the first edge above, we have\[ J\left ( u_{1},u_{2},u_{3}\right ) =8u_{1}u_{2}u_{3}-4u_{1}u_{2}-4u_{1}u_{3}-4u_{2}u_{3}+2u_{1}+2u_{2}+2u_{3}-1 \] Fixing \(u_{2}=u_{3}=1\) gives\begin{align*} J\left ( u_{1},1,1\right ) & =8u_{1}-4u_{1}-4u_{1}-4+2u_{1}+2+2-1\\ & =2u_{1}-1 \end{align*}

This is minimized at \(u_{1}=-1\). Hence \(\hat{u}_{1}=-1\). We see that on the above edge, \(J\) is smaller at vertex \(\left ( -1,1,1\right ) \) than at \(\left ( 1,1,1\right ) \). Now we look at the next edge, where \(u_{1}=-1\) and \(u_{3}=1\).

[figure: second edge, \(u_{1}=-1\), \(u_{3}=1\)]

Fixing \(u_{3}\) at \(1\) and \(u_{1}=\hat{u}_{1}=-1\) then\begin{align*} J\left ( \hat{u}_{1},u_{2},u_{3}\right ) & =J\left ( -1,u_{2},1\right ) \\ & =8\hat{u}_{1}u_{2}u_{3}-4\hat{u}_{1}u_{2}-4\hat{u}_{1}u_{3}-4u_{2}u_{3}+2\hat{u}_{1}+2u_{2}+2u_{3}-1\\ & =-8u_{2}+4u_{2}+4-4u_{2}-2+2u_{2}+2-1\\ & =3-6u_{2} \end{align*}

Hence \(J\left ( \hat{u}_{1},u_{2},1\right ) \) is minimized at \(u_{2}=1\). Therefore \(\hat{u}_{2}=1\). This tells us that \(J\) at vertex \(\left ( -1,1,1\right ) \) is smaller than at \(\left ( -1,-1,1\right ) \). Traveling down the edge between \(\left ( -1,1,1\right ) \) and \(\left ( -1,1,-1\right ) \), which is done by fixing \(u_{1},u_{2}\) and varying \(u_{3}\), gives

[figure: third edge, \(u_{1}=-1\), \(u_{2}=1\)]

Now we need to find \(\hat{u}_{3}\)\begin{align*} J\left ( \hat{u}_{1},\hat{u}_{2},u_{3}\right ) & =J\left ( -1,1,u_{3}\right ) \\ & =8\hat{u}_{1}\hat{u}_{2}u_{3}-4\hat{u}_{1}\hat{u}_{2}-4\hat{u}_{1}u_{3}-4\hat{u}_{2}u_{3}+2\hat{u}_{1}+2\hat{u}_{2}+2u_{3}-1\\ & =-8u_{3}+4+4u_{3}-4u_{3}-2+2+2u_{3}-1\\ & =3-6u_{3} \end{align*}

This is minimized at \(u_{3}=1\), therefore \(\hat{u}_{3}=1\). This means \(J\) is still smallest at vertex \(\left ( -1,1,1\right ) \). We have so far visited 3 edges and looked at 4 vertices, and found that \(J\) is smallest at \(\left ( -1,1,1\right ) \).\begin{align*} J\left ( -1,1,1\right ) & =8u_{1}u_{2}u_{3}-4u_{1}u_{2}-4u_{1}u_{3}-4u_{2}u_{3}+2u_{1}+2u_{2}+2u_{3}-1\\ & =-8+4+4-4-2+2+2-1\\ & =-3 \end{align*}

Here is a diagram illustrating what we have done so far.

[figure: edges visited in the first phase]

We now repeat the process starting from \(\left ( 1,1,1\right ) \), fixing \(u_{1}=1\) and \(u_{3}=1\) but varying \(u_{2}\). The calculation is similar to the above and is not shown. A small Matlab script was used to verify the results.
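The script itself is not reproduced in this write-up; the following is a minimal sketch of an equivalent check (not the original nma_HW1_problem2_ECE719). It evaluates \(J\) at all 8 vertices of the cube and then performs one coordinate-wise minimization pass starting from \(\left ( 1,1,1\right ) \).

% J(u) on the cube [-1,1]^3, as defined in the problem
J = @(u) 8*u(1)*u(2)*u(3) - 4*u(1)*u(2) - 4*u(1)*u(3) - 4*u(2)*u(3) ...
       + 2*u(1) + 2*u(2) + 2*u(3) - 1;

% values of J at all 8 vertices
v = [-1 1]; vals = zeros(1,8); n = 1;
for i = v
    for j = v
        for m = v
            vals(n) = J([i j m]); n = n + 1;
        end
    end
end
disp(vals)

% one coordinate-wise pass: fix all but one coordinate and minimize over
% the free one (J is then linear in it, so the optimum is at -1 or +1)
u = [1 1 1];
for k = 1:3
    lo = u; lo(k) = -1;
    hi = u; hi(k) = +1;
    if J(lo) < J(hi), u(k) = -1; else, u(k) = +1; end
end
u1Hat = u(1)
u2Hat = u(2)
u3Hat = u(3)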

The recorded output of the original script is

 
>> nma_HW1_problem2_ECE719 
[ -27.0, 9.0, 9.0, -3.0, 9.0, -3.0, -3.0, 1.0] 
u1Hat = 
-1 
u2Hat = 
1 
u3Hat = 
1

4.1.3 Problem 3

problem description

[figure: problem statement]

solution

Since \(N\left ( u\right ) \) is multilinear, its maximum and minimum values will be at a vertex, and similarly for \(D\left ( u\right ) \). Therefore we only need to compare the ratios \(\frac{N\left ( u\right ) }{D\left ( u\right ) }\) at the vertices to find the largest ratio. For example, for \(n=2\), we look at the four ratios \(\frac{N_{1}}{D_{1}},\frac{N_{2}}{D_{2}},\frac{N_{3}}{D_{3}},\frac{N_{4}}{D_{4}}\), where \(N_{i}\) denotes the value of \(N\left ( u\right ) \) at vertex \(i\), and similarly for \(D_{i}\).

One of these \(N_{i}\) will be the largest value that \(N\left ( u\right ) \) can take, and one of these \(D_{i}\) will be the smallest value that \(D\left ( u\right ) \) can take, so the maximum of \(J\left ( u\right ) =\frac{N\left ( u\right ) }{D\left ( u\right ) }\) will be one of these four ratios.

It cannot be at any other location \(u_{i}\). Proof by contradiction: assume there is a point somewhere else in the domain of \(J\left ( u\right ) \), say an interior point \(u_{i}\), where \(\frac{N_{i}}{D_{i}}\) is the largest. This would imply that \(N_{i}\) is so large as to make \(\frac{N_{i}}{D_{i}}\) larger than any value at the vertices regardless of what \(D_{i}\) happens to be at \(u_{i}\), which would make \(N_{i}\) the maximum of \(N\left ( u\right ) \); but this is not possible, since the maximum of \(N\left ( u\right ) \) must be at a vertex.

Or it could mean that \(D_{i}\) is so small that \(\frac{N_{i}}{D_{i}}\) is larger than any value at the vertices regardless of what \(N_{i}\) happens to be, which would make \(D_{i}\) the minimum of \(D\left ( u\right ) \); but this is also not possible, since the minimum of \(D\left ( u\right ) \) is at a vertex. Therefore the maximum of \(J\left ( u\right ) \) must be at a vertex and cannot be at any other point.
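As a numerical illustration (a sketch with hypothetical functions, not part of the original solution), take the multilinear \(N\left ( u\right ) =1+u_{1}u_{2}\) and \(D\left ( u\right ) =3+u_{1}+u_{2}+u_{1}u_{2}\), which satisfies \(D>0\) on \(\left [ -1,1\right ] ^{2}\); the largest ratio over a fine grid agrees with the largest vertex ratio.

% hypothetical multilinear N and D (D > 0 on the square): the max of N/D
% over a fine grid should agree with the max over the 4 vertices
N = @(x,y) 1 + x.*y;
D = @(x,y) 3 + x + y + x.*y;
[X,Y] = meshgrid(linspace(-1,1,201));
Z = N(X,Y)./D(X,Y);
v = [-1 1]; vertexMax = -inf;
for x = v
    for y = v
        vertexMax = max(vertexMax, N(x,y)/D(x,y));
    end
end
fprintf('grid max = %g, vertex max = %g\n', max(Z(:)), vertexMax)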

4.1.4 Problem 4

problem description

[figure: problem statement]

solution

[figure: resistor ladder network]

The total network resistance (input impedance) is\[ Z_{in}=R_{1}+R_{2}+R_{3}\Vert \left ( R_{4}+R_{5}+R_{6}\Vert \left ( R_{7}+R_{9}+R_{8}\right ) \right ) \] Using \(X\Vert Y=\frac{XY}{X+Y}\) for a parallel combination, the above becomes\begin{align*} Z_{in} & =R_{1}+R_{2}+\frac{R_{3}\left ( R_{4}+R_{5}+R_{6}\Vert \left ( R_{7}+R_{9}+R_{8}\right ) \right ) }{R_{3}+\left ( R_{4}+R_{5}+R_{6}\Vert \left ( R_{7}+R_{9}+R_{8}\right ) \right ) }\\ & =R_{1}+R_{2}+\frac{R_{3}\left ( R_{4}+R_{5}+\frac{R_{6}\left ( R_{7}+R_{9}+R_{8}\right ) }{R_{6}+\left ( R_{7}+R_{9}+R_{8}\right ) }\right ) }{R_{3}+\left ( R_{4}+R_{5}+\frac{R_{6}\left ( R_{7}+R_{9}+R_{8}\right ) }{R_{6}+\left ( R_{7}+R_{9}+R_{8}\right ) }\right ) } \end{align*}

The above is the overall ladder network resistance. Let \(X=R_{4}+R_{5}+\frac{R_{6}\left ( R_{7}+R_{9}+R_{8}\right ) }{R_{6}+\left ( R_{7}+R_{9}+R_{8}\right ) }\) to simplify the equation\[ Z_{in}=R_{1}+R_{2}+\frac{XR_{3}}{R_{3}+X}\] The output impedance is\begin{align*} Z_{out} & =R_{3}\Vert \left ( R_{4}+R_{5}+R_{6}\Vert \left ( R_{7}+R_{9}+R_{8}\right ) \right ) \\ & =\frac{XR_{3}}{R_{3}+X} \end{align*}

Hence \(V_{out}\) is now found, using \(V_{in}=1\), from \begin{align*} \frac{V_{out}}{V_{in}} & =\frac{Z_{out}}{Z_{in}}\\ V_{out} & =\frac{\frac{XR_{3}}{R_{3}+X}}{R_{1}+R_{2}+\frac{XR_{3}}{R_{3}+X}}=\frac{XR_{3}}{R_{1}\left ( R_{3}+X\right ) +R_{2}\left ( R_{3}+X\right ) +XR_{3}}\\ & =\frac{XR_{3}}{X\left ( R_{1}+R_{2}+R_{3}\right ) +R_{1}R_{3}+R_{2}R_{3}} \end{align*}

Since \(V_{out}\) is a multilinear function of \(R_{i},i=1\cdots 9\), the maximum and minimum occur at the end values of each resistance range, which are \(90\) and \(110\) ohms, so there are \(2^{9}\) different cases to check. A small script was used to calculate these vertex values and find the extreme \(V_{out}\). The maximum is\[ V_{out_{\max }}=0.3147\text{ volt}\] using \(R_{1}\) and \(R_{2}\) at \(90\) ohm and the remaining resistors at \(110\) ohm.
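The script itself is not included here. Below is a minimal sketch of the \(2^{9}\) vertex enumeration, assuming the \(V_{out}\) expression derived above (the original script is not shown, so this is illustrative rather than a reproduction).

% enumerate all 2^9 corner combinations of the resistor values and record
% the extreme Vout, using the expression derived above
Xfun = @(R) R(4) + R(5) + R(6)*(R(7)+R(9)+R(8))/(R(6)+R(7)+R(9)+R(8));
Vout = @(R) Xfun(R)*R(3) / (Xfun(R)*(R(1)+R(2)+R(3)) + R(1)*R(3) + R(2)*R(3));

best = -inf; worst = inf;
for k = 0:2^9-1
    bits = bitget(k, 1:9);     % each bit selects 90 ohm (0) or 110 ohm (1)
    R = 90 + 20*bits;
    val = Vout(R);
    if val > best,  best  = val; Rmax = R; end
    if val < worst, worst = val; Rmin = R; end
end
fprintf('max is %f at U=\n', best);  disp(Rmax)
fprintf('min is %f at U=\n', worst); disp(Rmin)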

The original script generates the following when run:

 
>> nma_HW1_problem4_ECE719 
max is 0.314732 at U= 
90    90   110   110   110   110   110   110   110 
min is 0.225627 at U= 
110   110    90    90    90    90    90    90    90

4.1.5 Problem 5

   4.1.5.1 Part(a)
   4.1.5.2 Part(b)
   4.1.5.3 Part(c)

problem description

[figure: problem statement]

solution

4.1.5.1 Part(a)

Since \(u_{k}^{\ast }\) minimizes \(f\left ( u\right ) \) when all \(u_{i}\) are fixed except for \(u_{k}\), this is the same as saying that solving \(\frac{\partial f}{\partial u_{k}}=0\) gives \(u_{k}^{\ast }\). This is the necessary condition for an extreme point from unconstrained calculus (we would still need to check the Hessian to confirm that \(u_{k}^{\ast }\) is a minimum rather than a maximum, but we are told here it is a minimizer).

But \(\frac{\partial f}{\partial u_{k}}\) is, by definition, the partial derivative of \(f\left ( u\right ) \) with respect to \(u_{k}\) with all the other \(u_{i}\) fixed. For \(\Re ^{n}\) this carries over and becomes the gradient. Therefore\begin{align*} \nabla f & =0\\\begin{pmatrix} \frac{\partial f}{\partial u_{1}}\\ \frac{\partial f}{\partial u_{2}}\\ \vdots \\ \frac{\partial f}{\partial u_{n}}\end{pmatrix} & =0 \end{align*}

This leads to the minimum being at \(\begin{pmatrix} u_{1}^{\ast }\\ u_{2}^{\ast }\\ \vdots \\ u_{n}^{\ast }\end{pmatrix} \), since we are told that each \(\frac{\partial f}{\partial u_{k}}=0\) has \(u_{k}^{\ast }\) as its solution.
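Written out as a single statement, the hypothesis of part (a) says that for each \(k\), with all the other coordinates held at their optimal values,\[ u_{k}^{\ast }=\arg \min _{u_{k}}f\left ( u_{1}^{\ast },\cdots ,u_{k-1}^{\ast },u_{k},u_{k+1}^{\ast },\cdots ,u_{n}^{\ast }\right ) \implies \left . \frac{\partial f}{\partial u_{k}}\right \vert _{u=u^{\ast }}=0 \] which, collected over \(k=1,\cdots ,n\), is exactly the statement \(\nabla f\left ( u^{\ast }\right ) =0\).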

4.1.5.2 Part(b)

An example where the above applies is \(J\left ( x,y\right ) =xy\) on \(U=\left [ 0,1\right ] ^{2}\), since\begin{align*} \nabla f & =0\\\begin{pmatrix} \frac{\partial f}{\partial x}\\ \frac{\partial f}{\partial y}\end{pmatrix} & =\begin{pmatrix} y\\ x \end{pmatrix} =\begin{pmatrix} 0\\ 0 \end{pmatrix} \end{align*}

Hence a minimizer is \(u^{\ast }=\begin{pmatrix} x=0\\ y=0 \end{pmatrix} \) and \(J\left ( u^{\ast }\right ) =0\) is the global minimum.

4.1.5.3 Part(c)

An example where part (a) does not apply is \(J\left ( x,y\right ) =xy\) on \(U=\left [ -1,1\right ] ^{2}\). Now \(u^{\ast }=\begin{pmatrix} 0\\ 0 \end{pmatrix} \) is not the minimizer, since taking \(x=-1,y=1\) or \(x=1,y=-1\) gives \(J=-1<J\left ( u^{\ast }\right ) =0\).
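As a quick numerical check (a sketch, not part of the original solution), the corner values of \(J\left ( x,y\right ) =xy\) on \(\left [ -1,1\right ] ^{2}\) can be compared with its value at the coordinate-wise stationary point \(\left ( 0,0\right ) \).

% on [-1,1]^2 the stationary point (0,0) of J(x,y) = x*y is not the
% global minimizer; two of the corners do better
J = @(x,y) x.*y;
v = [-1 1]; corners = zeros(1,4); n = 1;
for x = v
    for y = v
        corners(n) = J(x,y); n = n + 1;
    end
end
fprintf('J(0,0) = %g, smallest corner value = %g\n', J(0,0), min(corners))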

The following are plots of \(J(x,y)=xy\) over the different constraint sets, illustrating parts (b) and (c).

[plots of \(J(x,y)=xy\) over the constraint sets of parts (b) and (c)]

4.1.6 HW 1 key solution

[scanned key solution pages]