4.5 HW 5

  4.5.1 Problem 1
  4.5.2 Problem 2
  4.5.3 HW 5 key solution

4.5.1 Problem 1

   4.5.1.1 part(a)
   4.5.1.2 part(b)
   4.5.1.3 part(c)
   4.5.1.4 Appendix. Source code for problem 1

Figure 4.39: problem 1 description
4.5.1.1 part(a)

Let \(f_{i}:\Re ^{n}\rightarrow \Re \). We want to solve

\begin{equation} f_{i}\left ( \mathbf{x}\right ) =0\qquad i=1,2,\cdots ,N \tag{1} \end{equation}

This means finding \(\mathbf{x}^{\ast }\) which makes each value \(f_{i}\left ( \mathbf{x}^{\ast }\right ) \) zero. If we consider the vector \(F\left ( \mathbf{x}\right ) \) of the functions \(f_{i}\left ( \mathbf{x}\right ) \)

\[ F\left ( \mathbf{x}\right ) =\begin{pmatrix} f_{1}\left ( \mathbf{x}\right ) \\ f_{2}\left ( \mathbf{x}\right ) \\ \vdots \\ f_{N}\left ( \mathbf{x}\right ) \end{pmatrix} \]

Then the square of the Euclidean norm of \(F\left ( \mathbf{x}\right ) \) is

\[ \left \Vert F\left ( \mathbf{x}\right ) \right \Vert ^{2}=\sum _{i=1}^{N}f_{i}^{2}\left ( \mathbf{x}\right ) \]

The minimum possible value of \(\left \Vert F\left ( \mathbf{x}\right ) \right \Vert \) is zero, since it is a norm, and \(\left \Vert F\left ( \mathbf{x}\right ) \right \Vert =0\) exactly when \(\left \Vert F\left ( \mathbf{x}\right ) \right \Vert ^{2}=0\), which in turn holds exactly when \(F\left ( \mathbf{x}\right ) =0\). So the solution to (1) is the same \(\mathbf{x}^{\ast }\) as the minimizer \(\mathbf{x}^{\ast }\) of \(\left \Vert F\left ( \mathbf{x}\right ) \right \Vert ^{2}\), provided the minimum value attained is zero.

Therefore minimizing \(J\left ( x\right ) =\left \Vert F\left ( \mathbf{x}\right ) \right \Vert ^{2}=\sum _{i=1}^{N}f_{i}^{2}\left ( \mathbf{x}\right ) \) gives the solution to (1). This is similar to finding the least squares solution of a set of linear equations, except that now the set of equations \(F\left ( \mathbf{x}\right ) =0\) is non-linear in \(\mathbf{x}\).
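
As a small illustration (my own toy example, not from the problem statement): the linear system \(x_{1}+x_{2}=3\), \(x_{1}-x_{2}=1\) has the root \(\left ( 2,1\right ) \), and minimizing the squared norm recovers it:

% toy example: solve F(x)=0 by minimizing J(x)=||F(x)||^2 
F = @(x) [x(1)+x(2)-3; x(1)-x(2)-1]; 
J = @(x) sum(F(x).^2); 
xstar = fminsearch(J,[0;0])   % returns approximately (2,1)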

4.5.1.2 part(b)

\begin{align*} f_{1}\left ( x\right ) & =x_{2}-x_{1}^{3}+5x_{1}^{2}-2x_{1}-13\\ f_{2}\left ( x\right ) & =x_{2}+x_{1}^{3}+x_{1}^{2}-14x_{1}-29 \end{align*}

Hence\begin{align} J\left ( x\right ) & =f_{1}^{2}\left ( x\right ) +f_{2}^{2}\left ( x\right ) \nonumber \\ & =\left ( x_{2}-x_{1}^{3}+5x_{1}^{2}-2x_{1}-13\right ) ^{2}+\left ( x_{2}+x_{1}^{3}+x_{1}^{2}-14x_{1}-29\right ) ^{2}\nonumber \\ & =2x_{1}^{6}-8x_{1}^{5}+2x_{1}^{4}-80x_{1}^{3}+12x_{1}^{2}x_{2}+12x_{1}^{2}-32x_{1}x_{2}+864x_{1}+2x_{2}^{2}-84x_{2}+1010 \tag{1} \end{align}

\(J\left ( x\right ) \) is a non-linear function. The above is \(\left \Vert F\left ( \mathbf{x}\right ) \right \Vert ^{2}\) where now \(F\left ( \mathbf{x}\right ) =\begin{pmatrix} f_{1}\left ( \mathbf{x}\right ) \\ f_{2}\left ( \mathbf{x}\right ) \end{pmatrix} \). We will use optimization to find the solution \(\mathbf{x}\) to \(F\left ( \mathbf{x}\right ) =0\) by finding the minimizer of (1). The solution turns out to be \[ \mathbf{x}^{\ast }=\left ( x_{1}=4,x_{2}=5\right ) \] At this point \(J\left ( x^{\ast }\right ) =0\) and also \(f_{1}\left ( x^{\ast }\right ) =0\) and \(f_{2}\left ( x^{\ast }\right ) =0\), so this is the true solution to \(f_{i}\left ( x\right ) =0\). But there is also another local minimum close to it, located at \[ \mathbf{x}=\left ( x_{1}=-0.8968,x_{2}=11.4128\right ) \] where \(J\left ( x\right ) =48.98\), not zero. At this second local minimum, the corresponding values of \(f_{i}\) are \(f_{1}\left ( \mathbf{x}\right ) =4.949\) and \(f_{2}\left ( \mathbf{x}\right ) =-4.949\). These were found by running the conjugate gradient algorithm with Polyak-Ribiere stepping as given below.
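
A quick numeric check of these two points (using the expressions above):

f1 = @(x) x(2)-x(1)^3+5*x(1)^2-2*x(1)-13; 
f2 = @(x) x(2)+x(1)^3+x(1)^2-14*x(1)-29; 
J  = @(x) f1(x)^2 + f2(x)^2; 
J([4;5])               % 0, the true solution 
J([-0.8968;11.4128])   % approximately 48.98, the local minimum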

The following is a contour plot over the full range given in the problem statement, showing that there are two local minima, one around the point \(x_{1}=4.5,x_{2}=8\) and another around \(x_{1}=-1.3,x_{2}=10\).

Figure 4.40: contour plot, full range

This is a zoomed version of the above, showing the area around the two minima more clearly.

Figure 4.41: contour plot, zoomed version

This is a filled contour version of the above.

Figure 4.42: contour plot, filled zoomed version

This is a 3D plot of the function \(J(x)\).

Figure 4.43: 3D plot of \(J(x)\)
4.5.1.3 part(c)

A Matlab program is given in the appendix which implements Polyak-Ribiere (and also supports Fletcher-Reeves). The results are given below, with a discussion following each one. One result also shows an interesting difference found between Polyak-Ribiere and Fletcher-Reeves when starting from a particular randomly chosen point \(u^{0}\). In all runs, Matlab fminsearch was also used to compare the minimizer found. In some cases this algorithm found the same minimum as Matlab's fminsearch, and in other cases it did not.
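
For reference, these are the standard forms of the two update formulas (the program in the appendix is assumed to implement them in this form), writing \(g^{k}=\nabla J\left ( u^{k}\right ) \): \[ \beta _{k}^{PR}=\frac{\left ( g^{k+1}\right ) ^{T}\left ( g^{k+1}-g^{k}\right ) }{\left \Vert g^{k}\right \Vert ^{2}}\qquad \beta _{k}^{FR}=\frac{\left \Vert g^{k+1}\right \Vert ^{2}}{\left \Vert g^{k}\right \Vert ^{2}} \] The new search direction is then \(d^{k+1}=-g^{k+1}+\beta _{k}d^{k}\). Note that \(\beta _{k}^{PR}\) can be negative (as seen in some of the runs below) while \(\beta _{k}^{FR}\) cannot.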

The results show each point \(u^{k}\) visited, with the values of \(\beta _{k}\) and \(\alpha _{k}\) found, and the value of the objective function and the norm of the gradient at each step.

For the line search, golden section search was used to find \(\alpha _{k}\) with maximum step size \(H_{\max }=1\). The stopping criterion used for all these runs is \(\left \vert \nabla J\left ( u\right ) \right \vert \leq 0.001\).

The algorithm. The following is an outline of the general algorithm, expressed as pseudo code.

Figure 4.44: Conjugate gradient using Polyak-Ribiere or Fletcher-Reeves

Test 1 starting from \(\left ( -5.49,23.05\right ) \)

Figure 4.45: test case 1, problem 1, part c
Matlab fminsearch found \(J(-0.8968,11.4128)=48.9843\), and this program found the same minimum. Since \(u^{\ast }=(4,5)\), we see that the search did not find \(u^{\ast }\) but found the other local minimum near it, because the search was started from a point closer to the second one. Matlab fminsearch matched our result, which is good. It took \(8\) steps. We also notice that \(\left \vert \nabla J\left ( x^{k}\right ) \right \vert \) increased at one step (step 3) during the search. This is an indication that the function is not quadratic (which we already know), but \(\left \vert \nabla J\left ( x^{k}\right ) \right \vert \) started to decrease again after that.

\(k\) | \(x^{k}\) | \(J\left ( x^{k}\right ) \) | \(\left \vert \nabla J\left ( x^{k}\right ) \right \vert \) | \(\alpha _{k}\) | \(\beta _{k}\)
\(1\) | \(\left ( -5.59,23.05\right ) \) | \(129231.19\) | \(116683.427\) | \(0.000046\) | \(-0.000003\)
\(2\) | \(\left ( -0.20,23.028\right ) \) | \(122.99\) | \(15.12\) | \(0.273\) | \(86.62\)
\(3\) | \(\left ( -0.2227,18.9\right ) \) | \(91.83\) | \(140.56\) | \(0.003958\) | \(0.3535\)
\(4\) | \(\left ( -0.8,13.72\right ) \) | \(52.15\) | \(39.059\) | \(0.00394\) | \(0.4082\)
\(5\) | \(\left ( -0.855,11.885\right ) \) | \(49.16\) | \(12.1855\) | \(0.00236\) | \(0.05\)
\(6\) | \(\left ( -0.896,11.436\right ) \) | \(48.98\) | \(0.583\) | \(0.00238\) | \(0.0094\)
\(7\) | \(\left ( -0.8968,11.4129\right ) \) | \(48.98\) | \(0.00545\) | \(0.002095\) | \(0.00123\)
\(8\) | \(\left ( -0.8968,11.4128\right ) \) | \(48.98\) | \(0.000007\) | \(0.000000\) | \(0.000000\)

Test 2 starting from \(\left ( 5.8,35.89\right ) \)

Figure 4.46: test case 2, problem 1, part c
Matlab fminsearch found \(J(-0.8968,11.4128)=48.9843\), and this program found the same minimum. Since \(u^{\ast }=(4,5)\), we see again that the search did not find \(u^{\ast }\) but found the other local minimum. Matlab fminsearch matched our result, which is good. It took \(9\) steps. We also notice that \(\left \vert \nabla J\left ( x^{k}\right ) \right \vert \) increased at one step (step 3) during the search.

\(k\) | \(x^{k}\) | \(J\left ( x^{k}\right ) \) | \(\left \vert \nabla J\left ( x^{k}\right ) \right \vert \) | \(\alpha _{k}\) | \(\beta _{k}\)
\(1\) | \(\left ( 5.806,35.895\right ) \) | \(24303.88\) | \(32066.72\) | \(0.000170\) | \(0.000011\)
\(2\) | \(\left ( 0.353,35.847\right ) \) | \(520.53\) | \(49.582\) | \(0.2299\) | \(36.96\)
\(3\) | \(\left ( 0.435,24.448\right ) \) | \(238.41\) | \(301.265\) | \(0.00356\) | \(0.28\)
\(4\) | \(\left ( -0.59,17.92\right ) \) | \(71.95\) | \(69.317\) | \(0.0079\) | \(1.3363\)
\(5\) | \(\left ( -0.686,13.789\right ) \) | \(53.18\) | \(52.828\) | \(0.0028\) | \(0.1788\)
\(6\) | \(\left ( -0.883,11.7991\right ) \) | \(49.08\) | \(8.1969\) | \(0.00295\) | \(0.06401\)
\(7\) | \(\left ( -0.895,11.4284\right ) \) | \(48.98\) | \(0.4961\) | \(0.00193\) | \(0.00629\)
\(8\) | \(\left ( -0.896801,11.412913\right ) \) | \(48.98\) | \(0.003106\) | \(0.002634\) | \(0.000101\)
\(9\) | \(\left ( -0.896805,11.412779\right ) \) | \(48.98\) | \(0.000000\) | \(0.000000\) | \(0.000000\)

Test 3 starting from \(\left ( 5.59,-19.55\right ) \)

Figure 4.47: test case 3, problem 1, part c
Matlab fminsearch found the true minimum \(J(4,5)=0\). This program did not do as well, and went to the second local minimum at \(J(-0.8968,11.4128)=48.9843\), where the corresponding values are \(f_{1}(x)=4.9490,f_{2}(x)=-4.9490\).

One surprising thing to note is that, according to its help, Matlab fminsearch uses a simplex method, yet this problem is not linear. It turns out that fminsearch uses a modified version of the simplex method, the Nelder-Mead simplex (direct search). Here it did better than the algorithm implemented in this problem. But in the next test case, this algorithm evens the score with Matlab's: there it is our algorithm that finds the true minimum.
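
For reference, the fminsearch comparison can be reproduced with a call like the following (a sketch; the exact options used in the homework code may differ):

J = @(x) (x(2)-x(1)^3+5*x(1)^2-2*x(1)-13)^2 + (x(2)+x(1)^3+x(1)^2-14*x(1)-29)^2; 
% Nelder-Mead direct search from the test 3 starting point; per the 
% discussion above it reaches the true minimum (4,5) here 
[xmin,Jmin] = fminsearch(J,[5.59;-19.55])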

It took \(9\) steps. Again as before, \(\left \vert \nabla J\left ( x^{k}\right ) \right \vert \) increased at one step during the search (also at step 3).

\(k\) | \(x^{k}\) | \(J\left ( x^{k}\right ) \) | \(\left \vert \nabla J\left ( x^{k}\right ) \right \vert \) | \(\alpha _{k}\) | \(\beta _{k}\)
\(1\) | \(\left ( 5.5914,-19.553\right ) \) | \(10150.82\) | \(19380.2\) | \(0.000397\) | \(-0.000023\)
\(2\) | \(\left ( -2.107,-19.566\right ) \) | \(585.39\) | \(41.5373\) | \(0.2127\) | \(423.717\)
\(3\) | \(\left ( -2.142,-10.731\right ) \) | \(401.83\) | \(854.79\) | \(0.001138\) | \(-0.213\)
\(4\) | \(\left ( -1.247,9.303\right ) \) | \(79.34\) | \(264.025\) | \(0.000619\) | \(-0.14389\)
\(5\) | \(\left ( -1.1875,6.974\right ) \) | \(57.85\) | \(46.1832\) | \(0.00822\) | \(-0.2476\)
\(6\) | \(\left ( -0.9218,11.436\right ) \) | \(49.30\) | \(24.1288\) | \(0.001065\) | \(-0.02258\)
\(7\) | \(\left ( -0.9046,11.29\right ) \) | \(48.99\) | \(0.56707\) | \(0.0389\) | \(-0.0926\)
\(8\) | \(\left ( -0.8969,11.4131\right ) \) | \(48.98\) | \(0.05981\) | \(0.001129\) | \(-0.000415\)
\(9\) | \(\left ( -0.896806,11.412772\right ) \) | \(48.98\) | \(0.000025\) | \(0.000000\) | \(0.000000\)

Test 4 starting from \(\left ( 7.43472,16.05058\right ) \)

Figure 4.48: test case 4, problem 1, part c
Matlab fminsearch here did not find the true minimum \(J(4,5)=0\), while this algorithm did.

It took only \(6\) steps. Again as before, \(\left \vert \nabla J\left ( x^{k}\right ) \right \vert \) increased at one step during the search (at step 2).

\(k\) | \(x^{k}\) | \(J\left ( x^{k}\right ) \) | \(\left \vert \nabla J\left ( x^{k}\right ) \right \vert \) | \(\alpha _{k}\) | \(\beta _{k}\)
\(1\) | \(\left ( 7.435,16.05\right ) \) | \(143368.39\) | \(143787.679\) | \(0.000025\) | \(0.000002\)
\(2\) | \(\left ( 3.78,16.04\right ) \) | \(172.57\) | \(30.799\) | \(0.2696\) | \(200.218\)
\(3\) | \(\left ( 3.84,7.74\right ) \) | \(44.74\) | \(435.9738\) | \(0.000433\) | \(0.00797\)
\(4\) | \(\left ( 3.999778,5.066770\right ) \) | \(0.01\) | \(3.455556\) | \(0.001349\) | \(0.009287\)
\(5\) | \(\left ( 3.999989,5.000154\right ) \) | \(0.00\) | \(0.031875\) | \(0.000336\) | \(-0.000038\)
\(6\) | \(\left ( 4.000000,5.000000\right ) \) | \(0.00\) | \(0.000001\) | \(0.000000\) | \(0.000000\)

Test 5 starting from \(\left ( 3.809,-8.46\right ) \)

Figure 4.49: test case 5, problem 1, part c
Here both Matlab fminsearch and this algorithm found the true minimum.

It took only \(6\) steps. Again as before, \(\left \vert \nabla J\left ( x^{k}\right ) \right \vert \) increased at one step during the search (at step 3).

\(k\) | \(x^{k}\) | \(J\left ( x^{k}\right ) \) | \(\left \vert \nabla J\left ( x^{k}\right ) \right \vert \) | \(\alpha _{k}\) | \(\beta _{k}\)
\(1\) | \(\left ( 3.809524,-8.463035\right ) \) | \(580.29\) | \(1386.28\) | \(0.000285\) | \(0.000784\)
\(2\) | \(\left ( 4.204487,-8.444322\right ) \) | \(267.86\) | \(40.2297\) | \(0.33335\) | \(14.037757\)
\(3\) | \(\left ( 3.958554,4.969723\right ) \) | \(3.20\) | \(150.186235\) | \(0.000278\) | \(-0.009224\)
\(4\) | \(\left ( 3.997429,5.127561\right ) \) | \(0.02\) | \(1.447732\) | \(0.022782\) | \(0.258685\)
\(5\) | \(\left ( 4.000078,5.000400\right ) \) | \(0.00\) | \(0.316225\) | \(0.000273\) | \(0.000175\)
\(6\) | \(\left ( 4.000000,5.000004\right ) \) | \(0.00\) | \(0.000057\) | \(0.000000\) | \(0.000000\)

Test 6 (first strange one) starting from \(\left ( 6.63594,-14.29961\right ) \). This test case and the next one are pathological, in the sense that this algorithm did find the true minimum, but the path taken headed first toward the second local minimum and came very close to it, before turning and going to the true minimum at \((4,5)\). At this time, I am not able to explain this; more time is needed to investigate. It does however find the true minimum eventually, so this is a good result even if the path taken looks very strange compared to all the other tests above. The main difference between this test case and the previous ones is that here the objective function \(J\left ( x\right ) \) increased at one point during the search (at step 6, as shown below).

Figure 4.50: test case 6, Polyak-Ribiere, problem 1, part c

Here both Matlab fminsearch and this algorithm found the true minimum.

It took \(14\) steps. Here \(\left \vert \nabla J\left ( x^{k}\right ) \right \vert \) increased and decreased more than once during the search.
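
A possible explanation, offered as a conjecture and not part of the original analysis: at step 6 the reported \(\beta \) jumps to \(22576.86\), so the new direction is dominated by the previous direction and the iterate is thrown far away, to \((4.33,-45.93)\). A standard safeguard (not used in these runs) is to restart with steepest descent whenever the computed direction is not a descent direction, e.g.:

d = -gnew + beta*d;        % conjugate gradient direction update 
if gnew.'*d >= 0           % not a descent direction: restart 
    d = -gnew;             % fall back to steepest descent 
end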







\(k\) | \(x^{k}\) | \(J\left ( x^{k}\right ) \) | \(\left \vert \nabla J\left ( x^{k}\right ) \right \vert \) | \(\alpha _{k}\) | \(\beta _{k}\)
\(1\) | \(\left ( 6.63594,-14.29961\right ) \) | \(52701.77\) | \(67823.57\) | \(0.000127\) | \(0.000002\)
\(2\) | \(\left ( -1.970382,-14.321801\right ) \) | \(393.95\) | \(31.646\) | \(0.221630\) | \(385.651\)
\(3\) | \(\left ( -1.991739,-7.308131\right ) \) | \(282.95\) | \(621.5254\) | \(0.001424\) | \(-0.217509\)
\(4\) | \(\left ( -1.159618,10.073263\right ) \) | \(67.36\) | \(199.4501\) | \(0.000696\) | \(-0.132737\)
\(5\) | \(\left ( -1.109423,8.218728\right ) \) | \(53.56\) | \(31.5552\) | \(0.009140\) | \(-0.241872\)
\(6\) | \(\left ( -0.908598,11.459289\right ) \) | \(49.08\) | \(13.239\) | \(0.662964\) | \(22576.86\)
\(7\) | \(\left ( 4.328897,-45.933286\right ) \) | \(4299.56\) | \(1995.887\) | \(0.000000\) | \(-0.009991\)
\(8\) | \(\left ( 4.333267,-45.980644\right ) \) | \(4299.50\) | \(1975.7426\) | \(0.001732\) | \(3.309271\)
\(9\) | \(\left ( 4.620182,-11.840013\right ) \) | \(883.28\) | \(2743.2211\) | \(0.000271\) | \(-0.051462\)
\(10\) | \(\left ( 4.024522,5.862406\right ) \) | \(3.99\) | \(149.444\) | \(0.000338\) | \(-0.154415\)
\(11\) | \(\left ( 4.012227,4.726667\right ) \) | \(0.22\) | \(28.564\) | \(0.000532\) | \(-0.010341\)
\(12\) | \(\left ( 4.000032,5.002778\right ) \) | \(0.00\) | \(0.299\) | \(0.000518\) | \(-0.004453\)
\(13\) | \(\left ( 4.000001,4.999987\right ) \) | \(0.00\) | \(0.00134\) | \(0.000550\) | \(0.001374\)
\(14\) | \(\left ( 4.000000,5.000000\right ) \) | \(0.00\) | \(0.000002\) | \(0.000000\) | \(0.000000\)

The above was re-run starting from the same \(u^{0}\), but now using the Fletcher-Reeves formula. The result was surprising. The algorithm no longer showed the strange path above; however, it also did not find the true minimum at \(\left ( 4,5\right ) \) and instead went to the second local minimum, as shown below. This shows, at least in this test, that the Polyak-Ribiere formula did a better job, even though it took more steps.

Figure 4.51: test case 6, using Fletcher-Reeves, problem 1, part c







\(k\) | \(x^{k}\) | \(J\left ( x^{k}\right ) \) | \(\left \vert \nabla J\left ( x^{k}\right ) \right \vert \) | \(\alpha _{k}\) | \(\beta _{k}\)
\(1\) | \(\left ( 6.63594,-14.29961\right ) \) | \(52701.77\) | \(67823.57\) | \(0.000127\) | \(0.000000\)
\(2\) | \(\left ( -1.970382,-14.321801\right ) \) | \(393.95\) | \(31.646\) | \(0.2519\) | \(393.1282\)
\(3\) | \(\left ( -1.968895,-6.351520\right ) \) | \(267.83\) | \(627.462\) | \(0.001362\) | \(0.07595\)
\(4\) | \(\left ( -1.111164,10.592228\right ) \) | \(63.10\) | \(172.916\) | \(0.001001\) | \(0.000010\)
\(5\) | \(\left ( -0.890514,11.528839\right ) \) | \(48.99\) | \(0.5567\) | \(0.001137\) | \(0.03014\)
\(6\) | \(\left ( -0.889896,11.528704\right ) \) | \(48.99\) | \(0.0967\) | \(0.888831\) | \(202.77\)
\(7\) | \(\left ( -0.893567,11.441590\right ) \) | \(48.99\) | \(1.3763\) | \(0.001469\) | \(0.000012\)
\(8\) | \(\left ( -0.896818,11.412476\right ) \) | \(48.98\) | \(0.00481\) | \(0.001101\) | \(0.002681\)
\(9\) | \(\left ( -0.896823,11.412476\right ) \) | \(48.98\) | \(0.000249\) | \(0.000000\) | \(0.000000\)

Test 7 (second strange one) starting from \(\left ( 0.5837,-46.595\right ) \). This test case also showed a difference between Polyak-Ribiere and Fletcher-Reeves.

With Polyak-Ribiere, the algorithm found the same minimum as Matlab fminsearch, following a strange path on which \(J(u)\) increased at one point before decreasing again.

However, it did a better job than Fletcher-Reeves.

Figure 4.52: test case 7, Polyak-Ribiere, problem 1, part c

Here both Matlab fminsearch and this algorithm did not find the true minimum.

It took \(13\) steps. Here \(\left \vert \nabla J\left ( x^{k}\right ) \right \vert \) increased and decreased more than once during the search. Also, \(J(u)\) increased during the search before decreasing again.







\(k\) | \(x^{k}\) | \(J\left ( x^{k}\right ) \) | \(\left \vert \nabla J\left ( x^{k}\right ) \right \vert \) | \(\alpha _{k}\) | \(\beta _{k}\)
\(1\) | \(\left ( 0.583720,-46.595330\right ) \) | \(10438.38\) | \(1656.968\) | \(0.001966\) | \(0.003862\)
\(2\) | \(\left ( -2.624791,-46.035172\right ) \) | \(2450.06\) | \(103.007\) | \(0.564906\) | \(2.002\)
\(3\) | \(\left ( 3.819328,11.909200\right ) \) | \(72.20\) | \(148.967\) | \(0.000257\) | \(-0.1194\)
\(4\) | \(\left ( 3.863288,11.957784\right ) \) | \(69.31\) | \(28.774\) | \(0.163101\) | \(7.727\)
\(5\) | \(\left ( 4.016286,5.132002\right ) \) | \(0.67\) | \(70.2157\) | \(0.098572\) | \(18.327\)
\(6\) | \(\left ( -2.188772,-26.898544\right ) \) | \(959.12\) | \(336.852\) | \(0.000017\) | \(-0.183\)
\(7\) | \(\left ( -2.213801,-26.997880\right ) \) | \(958.16\) | \(255.113\) | \(0.025963\) | \(3.785\)
\(8\) | \(\left ( -1.599015,2.551447\right ) \) | \(123.88\) | \(387.315\) | \(0.001099\) | \(0.179\)
\(9\) | \(\left ( -1.074840,7.277357\right ) \) | \(57.59\) | \(60.1637\) | \(0.004433\) | \(0.517\)
\(10\) | \(\left ( -0.961876,10.715217\right ) \) | \(49.45\) | \(22.635\) | \(0.001854\) | \(-0.0502\)
\(11\) | \(\left ( -0.895537,11.456508\right ) \) | \(48.99\) | \(1.2014\) | \(0.002193\) | \(-0.01161\)
\(12\) | \(\left ( -0.896851,11.412270\right ) \) | \(48.98\) | \(0.014138\) | \(0.002174\) | \(0.000948\)
\(13\) | \(\left ( -0.896805,11.412779\right ) \) | \(48.98\) | \(0.000013\) | \(0.000000\) | \(0.000000\)

4.5.1.4 Appendix. Source code for problem 1

4.5.2 Problem 2

   4.5.2.1 part a
   4.5.2.2 part b
   4.5.2.3 Source code for problem 2

Figure 4.53: problem 2 description
4.5.2.1 part a

We need to find the minimum cost mix of food from Gaines and from Kennel that supplies at least \(48\) units of nutrient \(A\), \(165\) units of \(B\), and \(150\) units of \(C\), as shown in the following diagram.

Figure 4.54: problem 2, part a

Let the amount (in grams) from Gaines be \(u_{1}\) and the amount (in grams) from Kennel be \(u_{2}\). We therefore need to minimize the cost\begin{equation} J\left ( u\right ) =\left ( 0.0012\right ) u_{1}+\left ( 0.001\right ) u_{2}\tag{1} \end{equation} Since each gram from Gaines supplies \(8\) units of \(A\) and each gram from Kennel supplies \(3\) units of \(A\), the first constraint is\[ 8u_{1}+3u_{2}\geq 48 \] Similarly, for \(B\) and \(C\) we find\begin{align*} 11u_{1}+15u_{2} & \geq 165\\ 25u_{1}+6u_{2} & \geq 150 \end{align*}

Converting the inequalities to equalities by subtracting surplus variables (and using \(x\) instead of \(u\), since we are now moving to standard form)\[ 8x_{1}+3x_{2}-x_{3}=48 \] Similarly, for \(B\) and \(C\) we find\begin{align*} 11x_{1}+15x_{2}-x_{4} & =165\\ 25x_{1}+6x_{2}-x_{5} & =150 \end{align*}

Now we write the above in the standard form\begin{align*} & \min c^{T}x\\ Ax & =b\\ x & \geq 0 \end{align*}

Or\begin{align*} & \min \begin{bmatrix} 0.0012 & 0.001 & 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} x_{1}\\ x_{2}\\ x_{3}\\ x_{4}\\ x_{5}\end{bmatrix} \\\begin{bmatrix} 8 & 3 & -1 & 0 & 0\\ 11 & 15 & 0 & -1 & 0\\ 25 & 6 & 0 & 0 & -1 \end{bmatrix}\begin{bmatrix} x_{1}\\ x_{2}\\ x_{3}\\ x_{4}\\ x_{5}\end{bmatrix} & =\begin{bmatrix} 48\\ 165\\ 150 \end{bmatrix} \end{align*}

A graphical solution was found by plotting the three constraints. Since the optimal point must be at a vertex (extreme point) of the feasible region, we can read it off as approximately \(u^{\ast }=(4,8)\), which agrees with the more precise value \((4.0777,8.0097)\) found using Matlab linprog in the second part.

Figure 4.55: Graphical solution

Now contour lines are added to the above plot. Since the objective function is linear, its contours are straight lines, showing how \(J(u)\) increases. The optimal point is the first vertex of the feasible region touched by the level line with the smallest value of \(J(u)\). Here is the above plot with contour lines added:

Figure 4.56: Graphical solution with contour lines added
4.5.2.2 part b

Matlab linprog was used to solve the above for \(x_{1},x_{2},x_{3},x_{4},x_{5}\). Here is the result for \(x^{\ast }\)\begin{align*} x_{1} & =4.0777\\ x_{2} & =8.0097\\ x_{3} & =8.6505\\ x_{4} & =0\\ x_{5} & =0 \end{align*}

Mapping this back to \(u\), we see that \(u_{1}=x_{1}\) and \(u_{2}=x_{2}\). Hence, from (1), the minimum cost in dollars is\begin{align*} J(\mathbf{u}) & =\left ( 0.0012\right ) u_{1}+\left ( 0.001\right ) u_{2}\\ & =\left ( 0.0012\right ) 4.0777+\left ( 0.001\right ) 8.0097\\ & =0.012903 \end{align*}

The above is the cost of buying about \(4\) grams from Gaines and \(8\) grams from Kennel. It basically says to buy twice as much from Kennel as from Gaines.
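
As a quick independent check (mine, not part of the original solution): since \(x_{4}=x_{5}=0\), the \(B\) and \(C\) constraints are active at the optimum, so the optimal vertex solves a \(2\times 2\) linear system:

% active constraints: 11*u1+15*u2=165 and 25*u1+6*u2=150 
u = [11 15; 25 6] \ [165; 150]   % gives u = (4.0777, 8.0097) 
cost = [0.0012, 0.001]*u         % gives 0.0129, matching linprog below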

4.5.2.3 Source code for problem 2

c1=0.0012;            % cost per gram from Gaines 
c2=0.001;             % cost per gram from Kennel 
f=[c1,c2,0,0,0];      % surplus variables x3,x4,x5 have zero cost 
A=[8,3,-1,0,0;        % nutrient A constraint 
   11,15,0,-1,0;      % nutrient B constraint 
   25,6,0,0,-1];      % nutrient C constraint 
b=[48;165;150]; 
% equality constraints only, with lower bounds x>=0 and no upper bounds 
[X,FVAL,EXITFLAG,OUTPUT]=linprog(f,[],[],A,b,zeros(size(f)),[])

Result of above run

Optimization terminated. 
X = 
    4.0777 
    8.0097 
    8.6505 
    0.0000 
    0.0000 
FVAL = 
    0.0129 
EXITFLAG = 
    1 
OUTPUT = 
         iterations: 6 
          algorithm: 'interior-point-legacy' 
       cgiterations: 0 
            message: 'Optimization terminated.' 
    constrviolation: 8.5265e-14 
      firstorderopt: 4.0665e-10

4.5.3 HW 5 key solution
