4.1 Worked examples

As with the original Kovacic algorithm, the input is the ode \begin {align} y^{\prime \prime }\left ( x\right ) +ay^{\prime }\left ( x\right ) +by\left ( x\right ) & =0\tag {1}\\ a,b & \in \mathbb {C} \left ( x\right ) \nonumber \end {align}

It is first transformed to the following ode, by eliminating the first derivative, \begin {equation} z^{\prime \prime }=rz\qquad r\in \mathbb {C} \left ( x\right ) \tag {2} \end {equation} using what is known as the Liouville transformation, given by\begin {equation} y=ze^{\frac {-1}{2}\int adx} \tag {3} \end {equation} where \(r\) in (2) is found to be \begin {equation} r=\frac {1}{4}a^{2}+\frac {1}{2}a^{\prime }-b \tag {4} \end {equation} It is Eq. (2) (called the DE from now on) which is solved, not Eq. (1). Saunders' version has three main steps, plus a preliminary step 0. Cases 1 and 2 are handled in the same way. In showing the steps, both the Saunders paper (2) and the Carolyn J. Smith paper (3) were used. The Smith paper is more detailed and also contains some corrections to Saunders' algorithm.
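The transformation in Eqs. (3)-(4) can be checked mechanically. A minimal sketch using sympy (the function name is my own, not from the paper):

```python
# Sketch: compute r of Eq. (4) from the coefficients a, b of Eq. (1),
# so that y'' + a y' + b y = 0 maps to z'' = r z via Eq. (3).
import sympy as sp

x = sp.symbols('x')

def transform_to_normal_form(a, b):
    """Return r = a**2/4 + a'/2 - b, as in Eq. (4)."""
    return sp.simplify(a**2 / 4 + sp.diff(a, x) / 2 - b)

# Example: a = 1/x, b = -1 gives r = 1/(4x^2) - 1/(2x^2) + 1 = 1 - 1/(4x^2)
r = transform_to_normal_form(1/x, sp.Integer(-1))
```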

There are 4 steps to the algorithm (step 0 through step 3). Step 0 determines which case \(r\) belongs to (case 1, 2, 3, or none of these). Step 1 determines the fixed values \(e_{fixed},\theta _{fixed}\) and also the \(e_{i},\theta _{i}\), where \(i\) ranges from 0 upward depending on the problem (see below). Step 2 uses \(e_{fixed},\theta _{fixed}\) and all the \(e_{i},\theta _{i}\) found in step 1 to determine the trial \(d,\Theta \). Here \(\Theta \) is used instead of the \(\theta \) of the Smith paper, so as not to confuse it with the \(\theta _{i}\) found in step 1. If a trial \(d\) can be found which is a non-negative integer, then step 3 is called. It is in step 3 that the minimal polynomial \(p\left ( x\right ) \) is found using \(d\) and \(\Theta \). If such a \(p\left ( x\right ) \) can be found, then \(\omega \) is solved for and the solution of the ode \(z^{\prime \prime }=rz\) is \(z=e^{\int \omega dx}\). If no \(p\left ( x\right ) \) can be found, the next trial \(d,\Theta \) is tried, and this continues until all trials are exhausted or a solution is found. More details on each step are given below. The trials \(d,\Theta \) are found by iterating over all possible sets of values, called \(s\). These sets are generated depending on the case number and the value \(M\), which is related to the square free factorization \(t=t_{1}t_{2}^{2}t_{3}^{3}\cdots t_{m}^{m}\) as described in step 1. How this is all done is shown in the examples below.

Step 0. This step is similar to the Kovacic algorithm. In it we determine the necessary conditions for each case, but in a more direct way than in the original version. Given \(z^{\prime \prime }=rz\), we write \(r=\frac {s}{t}\) and then perform a square free factorization of \(t\), which gives\[ t=t_{1}t_{2}^{2}t_{3}^{3}\cdots t_{m}^{m}\] For example, if \(t=x^{2}\), then \(t_{1}=1,t_{2}=x\). And if \(t=3-x^{3}\), then \(t_{1}=-\left ( x^{3}-3\right ) =3-x^{3}\), since \(t\) is already square free. And \(O\left ( \infty \right ) =\deg \left ( t\right ) -\deg \left ( s\right ) \). We then determine which case we are in by checking the necessary conditions; this is done slightly differently from the original Kovacic algorithm. At the end of this step we know whether \(L=\left [ 1\right ] \) (case 1), \(L=\left [ 1,2\right ] \) (cases 1 and 2), \(L=\left [ 2\right ] \) (case 2 only), \(L=\left [ 4,6,12\right ] \) (case 3 only), and so on.
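Step 0 can be sketched with sympy's built-in square free factorization; `sqf_list` returns the content and the \(t_{i}\) factors with their multiplicities (the helper name is assumed, not from the paper):

```python
# Sketch: square-free factorization of t and O(infinity) = deg(t) - deg(s)
# for r = s/t, using sympy.
import sympy as sp

x = sp.symbols('x')

def analyze_r(r):
    s, t = sp.fraction(sp.cancel(r))
    # sqf_list returns (content, [(t_i, i), ...]) with each t_i square free
    content, factors = sp.sqf_list(sp.Poly(t, x))
    O_inf = sp.degree(t, x) - sp.degree(s, x)
    return content, factors, O_inf

# t = 16 x^2: content 16, single square-free factor x with multiplicity 2,
# and O(infinity) = 2 - 1 = 1
content, factors, O_inf = analyze_r((16*x - 3) / (16*x**2))
```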

The necessary conditions are based on the square free factorization \(t=t_{1}t_{2}^{2}t_{3}^{3}\cdots t_{m}^{m}\), where \(r=\frac {s}{t}\), and are summarized in the Carolyn J. Smith paper (3) as follows (these are the same necessary conditions as in the original Kovacic paper, but expressed in terms of the square free factorization of \(t\)).

- \(L=\left [ 1\right ] \) (meaning case 1) if \(t_{i}=1\) for all odd \(i\geq 3\) (i.e. no poles of odd order other than \(1\) are allowed) and \(O\left ( \infty \right ) \) is even (i.e. the allowed \(O\left ( \infty \right ) \) are \(\cdots ,-6,-4,-2,0,2,4,6,\cdots \)).
- \(L=\left [ 2\right ] \) (meaning case 2) if \(t_{2}\neq 1\) or \(t_{i}\neq 1\) for some odd \(i\geq 3\) (i.e. \(r\) must have a pole of order \(2\), or a pole of odd order \(3,5,7,\cdots \)).
- \(L=\left [ 4,6,12\right ] \) (meaning case 3) if \(t_{i}=1\) for all \(i>2\) and \(O\left ( \infty \right ) \geq 2\) (i.e. only poles of order \(1\) or \(2\) are allowed).
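The three conditions above can be sketched as a small predicate. Here `mults` (an assumed input, not from the paper) is the set of exponents \(i\) with \(t_{i}\neq 1\):

```python
# Sketch of the necessary-condition test of step 0, assuming the square-free
# multiplicity structure of t and O(infinity) are already known.
def determine_cases(mults, O_inf):
    L = []
    # Case 1: no poles of odd order above 1 (no odd i >= 3), O(inf) even
    if all(i % 2 == 0 or i == 1 for i in mults) and O_inf % 2 == 0:
        L.append(1)
    # Case 2: a pole of order 2, or a pole of odd order >= 3
    if 2 in mults or any(i % 2 == 1 and i >= 3 for i in mults):
        L.append(2)
    # Case 3: only poles of order 1 or 2, and O(inf) >= 2
    if all(i <= 2 for i in mults) and O_inf >= 2:
        L.extend([4, 6, 12])
    return L

# For r = (16x-3)/(16x^2): t_2 = x and O(infinity) = 1, so case 2 only
cases = determine_cases({2}, 1)
```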

Step 1. This step has 4 parts (a, b, c, d).

Part (a). Here the fixed parts \(e_{fixed},\theta _{fixed}\) are calculated using \begin {align} e_{fixed} & =\frac {1}{4}\left ( \min \left ( O\left ( \infty \right ) ,2\right ) -\deg \left ( t\right ) -3\deg \left ( t_{1}\right ) \right ) \tag {5}\\ \theta _{fixed} & =\frac {1}{4}\left ( \frac {t^{\prime }}{t}+3\frac {t_{1}^{\prime }}{t_{1}}\right ) \tag {6} \end {align}

As an example, let \(r=\frac {16x-3}{16x^{2}}\). Hence \(t=16x^{2}=t_{1}t_{2}^{2}\) where \(t_{1}=16,t_{2}=x\), and \(O\left ( \infty \right ) =2-1=1\). Therefore \(\deg \left ( t\right ) =2,\deg \left ( t_{1}\right ) =0\), \(t^{\prime }=\frac {d}{dx}16x^{2}=32x\) and \(t_{1}^{\prime }=0\). The above becomes\begin {align*} e_{fixed} & =\frac {1}{4}\left ( \min \left ( 1,2\right ) -2-3\left ( 0\right ) \right ) \\ & =\frac {1}{4}\left ( 1-2\right ) \\ & =-\frac {1}{4}\\ \theta _{fixed} & =\frac {1}{4}\left ( \frac {32x}{16x^{2}}+3\left ( 0\right ) \right ) \\ & =\frac {1}{2x} \end {align*}
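The computation of Eqs. (5)-(6) for this example can be sketched as follows (the helper name is assumed):

```python
# Sketch of Eqs. (5)-(6), checked against the worked example above.
import sympy as sp

x = sp.symbols('x')

def fixed_parts(s, t, t1):
    """e_fixed, theta_fixed from r = s/t and the square-free part t_1."""
    O_inf = sp.degree(t, x) - sp.degree(s, x)
    e_fixed = sp.Rational(1, 4) * (min(O_inf, 2)
                                   - sp.degree(t, x) - 3*sp.degree(t1, x))
    theta_fixed = sp.Rational(1, 4) * (sp.diff(t, x)/t + 3*sp.diff(t1, x)/t1)
    return e_fixed, sp.simplify(theta_fixed)

# r = (16x-3)/(16x^2): e_fixed = -1/4, theta_fixed = 1/(2x)
e_fix, th_fix = fixed_parts(16*x - 3, 16*x**2, sp.Integer(16))
```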

Part (b)

Here the values \(e_{i},\theta _{i}\) are found for \(i=1\cdots k_{2}\), where \(k_{2}\) is the number of roots of \(t_{2}\), in other words the number of poles of \(r\) of order \(2\). These are the zeros of \(t_{2}\) in the above square free factorization of \(t\). Label these poles \(c_{1},c_{2},\cdots ,c_{k_{2}}\). For each \(c_{i}\), set \(e_{i}=\sqrt {1+4b}\), where \(b\) is the coefficient of \(\frac {1}{\left ( x-c_{i}\right ) ^{2}}\) in the partial fraction expansion of \(r\), and \(\theta _{i}=\frac {e_{i}}{x-c_{i}}\). For example, if \(t=x^{2}\), then \(t=t_{1}t_{2}^{2}\) with \(t_{1}=1,t_{2}=x\). Hence the only zero of \(t_{2}\) is \(c_{1}=0\), so \(k_{2}=1\) and there is only one iteration to do. If, say, the coefficient of \(\frac {1}{x^{2}}\) in the partial fraction expansion of \(r\) is \(b=1\), then \(e_{1}=\sqrt {1+4\left ( 1\right ) }=\sqrt {5}\) and \(\theta _{1}=\frac {\sqrt {5}}{x-0}=\frac {\sqrt {5}}{x}\).
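For a pole \(c\) of order 2, the coefficient \(b\) can be recovered as \(\lim _{x\rightarrow c}\left ( x-c\right ) ^{2}r\), which avoids a full partial fraction expansion. A sketch (the helper name is assumed), checked on \(r=\frac {16x-3}{16x^{2}}\), where \(b=-\frac {3}{16}\) and \(e=\frac {1}{2}\):

```python
# Sketch of part (b): for a pole c of order 2, b is the coefficient of
# 1/(x-c)^2 in the partial fractions of r, recoverable as lim (x-c)^2 r.
import sympy as sp

x = sp.symbols('x')

def order2_pole_data(r, c):
    b = sp.limit((x - c)**2 * r, x, c)
    e = sp.sqrt(1 + 4*b)
    theta = e / (x - c)
    return e, theta

# r = (16x-3)/(16x^2) has b = -3/16 at c = 0, so e = sqrt(1 - 3/4) = 1/2
e1, th1 = order2_pole_data((16*x - 3)/(16*x**2), 0)
```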

Part (c)

This part applies only to case 1. It generates additional values \(e_{i},\theta _{i}\) beyond those generated in part (b). This is done for poles of \(r\) of order \(4,6,8,\cdots ,M\), if any exist. These are the roots of \(t_{4},t_{6},\cdots ,t_{M}\). These poles are labeled \(c_{k_{2}+1},c_{k_{2}+2},\cdots ,c_{k}\); the labeling starts from \(k_{2}+1\) since \(c_{1},c_{2},\cdots ,c_{k_{2}}\) were used in part (b) for the poles of order 2. Now we iterate for \(i\) from \(k_{2}+1\) to \(k\), and for each pole find its \(e_{i}\) and \(\theta _{i}\). These are found as in the original Kovacic paper. The examples below illustrate how this is done.

Note that if there are no poles of order \(4,6,\cdots \) then \(k=k_{2}=M\). The value \(M\) is used below to generate the \(s\) sequences in step 2. What this means is that for case 1, if there are no poles of order \(4,6,\cdots \), then \(M\) is just the number of poles of \(r\) of order 2. For cases other than case 1, \(M\) is always the number of poles of \(r\) of order 2; the check on poles of order \(4,6,\cdots \) is done only for case 1.

Part (d)

Now we need to find \(e_{0},\theta _{0}\). If \(O\left ( \infty \right ) >2\) then \(e_{0}=1,\theta _{0}=0\). If \(O\left ( \infty \right ) =2\) then \(\theta _{0}=0\) and \(e_{0}=\sqrt {1+4b}\), where \(b\) is the coefficient of \(\frac {1}{x^{2}}\) in the Laurent series expansion of \(r\) at \(\infty \). This is the same as in the original Kovacic algorithm. If \(O\left ( \infty \right ) \) falls in neither of these two cases, then case 1 is handled on its own, and the examples below show how this is done; for all other cases \(e_{0}=0,\theta _{0}=0\).
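Part (d) can be sketched as follows; the coefficient \(b\) of \(\frac {1}{x^{2}}\) at \(\infty \) is recovered as \(\lim _{x\rightarrow \infty }x^{2}r\) when \(O\left ( \infty \right ) =2\) (the helper name is assumed, and the remaining \(O\left ( \infty \right ) <2\) situation is left to the per-case handling described above):

```python
# Sketch of part (d): e_0, theta_0 from the behaviour of r at infinity.
import sympy as sp

x = sp.symbols('x')

def infinity_data(r):
    s, t = sp.fraction(sp.cancel(r))
    O_inf = sp.degree(t, x) - sp.degree(s, x)
    if O_inf > 2:
        return sp.Integer(1), sp.Integer(0)
    if O_inf == 2:
        b = sp.limit(x**2 * r, x, sp.oo)   # coefficient of 1/x^2 at infinity
        return sp.sqrt(1 + 4*b), sp.Integer(0)
    return sp.Integer(0), sp.Integer(0)    # O(inf) < 2: handled per-case

# r = 2/x^2: O(infinity) = 2, b = 2, e_0 = sqrt(1 + 8) = 3
e0, th0 = infinity_data(2/x**2)
```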

The above completes step 1, which generates the candidate \(e\)'s and \(\theta \)'s. In step 2, these are used to generate the trials \(d\) and \(\Theta \), and from these to find the polynomial \(p\left ( x\right ) \) if possible.

Step 2. In this step we have all the \(e_{i},\theta _{i}\) values found above, in addition to \(e_{fix},\theta _{fix}\). Now we generate a trial \(d\) using\begin {equation} d=\left ( n\right ) \left ( e_{fix}\right ) +s_{0}e_{0}-\sum _{i=1}^{M}s_{i}e_{i} \tag {7} \end {equation} Where \(n\) is the case number: for case 1, \(n=1\); for case 2, \(n=2\); for case 3, \(n\) takes the values \(4\), \(6\) and \(12\). If \(d\) is a non-negative integer then we find a trial \(\Theta \); we need both \(d\) and \(\Theta \) to go to the next step. \(\Theta \) is found using\begin {equation} \Theta =\left ( n\right ) \left ( \theta _{fix}\right ) +\sum _{i=0}^{M}s_{i}\theta _{i} \tag {8} \end {equation} What is \(s\) in the above? These are sets of values generated from all combinations based on the case number \(n\) and the value \(M\), as described below. The trial \(d\) and \(\Theta \) are generated for each such set \(s\). Each set \(s\) contains the values \(\left \{ s_{0},s_{1},\cdots ,s_{M}\right \} \), so \(s_{i}\) above means the \(i^{th}\) element of the current set \(s\). There are \(\left ( n+1\right ) ^{M+1}\) such sets \(s\). For example, in case \(1\) (so \(n=1\)) with \(M=2\) there are \(2^{3}=8\) sets \(s\) to try. For each set we generate \(d,\Theta \). If a set \(s\) gives a \(d\) which is a non-negative integer, then \(\Theta \) is generated and step 3 is called to calculate \(\omega \). If step 3 succeeds, we stop, since a solution has been found. Hence step 3 takes as input the trial \(d\) and \(\Theta \) and is called repeatedly from step 2 until either the solution \(z=e^{\int \omega dx}\) is found or all sets \(s\) are used. This is done for each case number \(n\), which can be \(1,2,4,6\) or \(12\), starting from case 1 through case 3 (recall that case 3 has \(n=4,6,12\) in it). Of course, if any one case finds a solution, the algorithm stops.
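Eqs. (7)-(8) for one set \(s\) can be sketched as below. The numbers in the example call are made up purely for illustration and are not taken from any worked problem:

```python
# Sketch of Eqs. (7)-(8): one trial (d, Theta) from one set s = {s_0,...,s_M}.
# All inputs are assumed precomputed in step 1.
import sympy as sp

x = sp.symbols('x')

def trial(n, e_fix, th_fix, e, th, s):
    """e = [e_0,...,e_M], th = [theta_0,...,theta_M], s = [s_0,...,s_M]."""
    d = n*e_fix + s[0]*e[0] - sum(s[i]*e[i] for i in range(1, len(e)))
    Theta = n*th_fix + sum(s[i]*th[i] for i in range(len(th)))
    return d, sp.simplify(Theta)

# Made-up inputs: n = 1, e_fix = -1/4, theta_fix = 1/(2x), M = 1
d, Theta = trial(1, sp.Rational(-1, 4), 1/(2*x),
                 [sp.Rational(5, 2), sp.Rational(1, 2)],
                 [sp.Integer(0), 1/x],
                 [sp.Rational(1, 2), sp.Rational(-1, 2)])
```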

Before going to the description of step 3, we show how the sets \(s\) are generated. This depends on the values of \(n\) and \(M\). Recall that for case 1, \(M\) is the number of poles of \(r\) of order 2 when there are no higher order poles. Each set has \(M+1\) entries, indexed from \(0\) to \(M\), and each entry ranges over the \(n+1\) values \(-\frac {n}{2},-\frac {n}{2}+1,\cdots ,+\frac {n}{2}\). Hence for \(M=0\) (which happens when there are no poles), there are \(\left ( n+1\right ) ^{1}\) sets. For example, for \(n=1\) this means the two sets \(s=\left \{ \frac {1}{2}\right \} \) and \(s=\left \{ -\frac {1}{2}\right \} \) to try\[ \Lambda =\begin {array} [c]{c}-\frac {1}{2}\\ +\frac {1}{2}\end {array} \] For \(n=1,M=2\) there are \(\left ( 1+1\right ) ^{3}=8\) sets, namely all combinations of \(\left \{ \pm \frac {1}{2},\pm \frac {1}{2},\pm \frac {1}{2}\right \} \). This results in the following matrix\[ \Lambda =\begin {bmatrix} -\frac {1}{2} & -\frac {1}{2} & -\frac {1}{2}\\ -\frac {1}{2} & -\frac {1}{2} & +\frac {1}{2}\\ -\frac {1}{2} & +\frac {1}{2} & -\frac {1}{2}\\ -\frac {1}{2} & +\frac {1}{2} & +\frac {1}{2}\\ +\frac {1}{2} & -\frac {1}{2} & -\frac {1}{2}\\ +\frac {1}{2} & -\frac {1}{2} & +\frac {1}{2}\\ +\frac {1}{2} & +\frac {1}{2} & -\frac {1}{2}\\ +\frac {1}{2} & +\frac {1}{2} & +\frac {1}{2}\end {bmatrix} \] But if \(n=1\) and \(M=1\) then there are \(\left ( 1+1\right ) ^{2}=4\) sets, all combinations of \(\left \{ \pm \frac {1}{2},\pm \frac {1}{2}\right \} \), and the matrix is\[ \Lambda =\begin {bmatrix} -\frac {1}{2} & -\frac {1}{2}\\ -\frac {1}{2} & +\frac {1}{2}\\ +\frac {1}{2} & -\frac {1}{2}\\ +\frac {1}{2} & +\frac {1}{2}\end {bmatrix} \] Each row of the matrix \(\Lambda \) is one set \(s\) to try.
To be more clear, in the equation \(\Theta =\left ( n\right ) \left ( \theta _{fix}\right ) +\sum _{i=0}^{M}s_{i}\theta _{i}\) the \(s_{i}\) means the \(i^{th}\) entry of the specific set \(s\) being used at the moment, which is one row of the matrix \(\Lambda \). For example, if we are trying the second row of the \(n=1,M=2\) matrix \(\Lambda \), then \(s_{0}=-\frac {1}{2},s_{1}=-\frac {1}{2},s_{2}=+\frac {1}{2}\). For the case \(n=1\) and \(M=3\), the sets are all combinations of \(\left \{ \pm \frac {1}{2},\pm \frac {1}{2},\pm \frac {1}{2},\pm \frac {1}{2}\right \} \), giving \(2^{4}=16\) different sets \(s\) (or 16 rows in the matrix \(\Lambda \)). The matrix \(\Lambda \) is\[ \Lambda =\begin {bmatrix} -\frac {1}{2} & -\frac {1}{2} & -\frac {1}{2} & -\frac {1}{2}\\ -\frac {1}{2} & -\frac {1}{2} & -\frac {1}{2} & \frac {1}{2}\\ -\frac {1}{2} & -\frac {1}{2} & \frac {1}{2} & -\frac {1}{2}\\ -\frac {1}{2} & -\frac {1}{2} & \frac {1}{2} & \frac {1}{2}\\ -\frac {1}{2} & \frac {1}{2} & -\frac {1}{2} & -\frac {1}{2}\\ -\frac {1}{2} & \frac {1}{2} & -\frac {1}{2} & \frac {1}{2}\\ -\frac {1}{2} & \frac {1}{2} & \frac {1}{2} & -\frac {1}{2}\\ -\frac {1}{2} & \frac {1}{2} & \frac {1}{2} & \frac {1}{2}\\ \frac {1}{2} & -\frac {1}{2} & -\frac {1}{2} & -\frac {1}{2}\\ \frac {1}{2} & -\frac {1}{2} & -\frac {1}{2} & \frac {1}{2}\\ \frac {1}{2} & -\frac {1}{2} & \frac {1}{2} & -\frac {1}{2}\\ \frac {1}{2} & -\frac {1}{2} & \frac {1}{2} & \frac {1}{2}\\ \frac {1}{2} & \frac {1}{2} & -\frac {1}{2} & -\frac {1}{2}\\ \frac {1}{2} & \frac {1}{2} & -\frac {1}{2} & \frac {1}{2}\\ \frac {1}{2} & \frac {1}{2} & \frac {1}{2} & -\frac {1}{2}\\ \frac {1}{2} & \frac {1}{2} & \frac {1}{2} & \frac {1}{2}\end {bmatrix} \] For case 2, meaning \(n=2\), each entry ranges over \(-1,0,+1\); with \(M=2\) we take all combinations of these values in each of the three entries.
This gives \(3^{3}=27\) different sets to try\[ \Lambda =\begin {bmatrix} -1 & -1 & -1\\ -1 & -1 & 0\\ -1 & -1 & +1\\ -1 & 0 & -1\\ -1 & 0 & 0\\ -1 & 0 & +1\\ \vdots & \vdots & \vdots \\ +1 & +1 & +1 \end {bmatrix} \] And if \(n=2,M=3\) then this gives \(3^{4}=81\) different sets \(s\)\[ \Lambda =\begin {bmatrix} -1 & -1 & -1 & -1\\ -1 & -1 & -1 & 0\\ -1 & -1 & -1 & +1\\ -1 & -1 & 0 & -1\\ -1 & -1 & 0 & 0\\ -1 & -1 & 0 & +1\\ \vdots & \vdots & \vdots & \vdots \\ +1 & +1 & +1 & +1 \end {bmatrix} \] And if \(n=4,M=2\) then this gives \(5^{3}=125\) different sets \(s\)\[ \Lambda =\begin {bmatrix} -2 & -2 & -2\\ -2 & -2 & -1\\ -2 & -2 & 0\\ -2 & -2 & +1\\ -2 & -2 & +2\\ \vdots & \vdots & \vdots \\ +2 & +2 & +2 \end {bmatrix} \] And if \(n=4,M=3\) then this gives \(5^{4}=625\) different sets \(s\)\[ \Lambda =\begin {bmatrix} -2 & -2 & -2 & -2\\ -2 & -2 & -2 & -1\\ -2 & -2 & -2 & 0\\ -2 & -2 & -2 & +1\\ -2 & -2 & -2 & +2\\ \vdots & \vdots & \vdots & \vdots \\ +2 & +2 & +2 & +2 \end {bmatrix} \] And if \(n=6,M=2\) then this gives \(7^{3}=343\) different sets \(s\)\[ \Lambda =\begin {bmatrix} -3 & -3 & -3\\ -3 & -3 & -2\\ -3 & -3 & -1\\ -3 & -3 & 0\\ -3 & -3 & +1\\ -3 & -3 & +2\\ -3 & -3 & +3\\ \vdots & \vdots & \vdots \\ +3 & +3 & +3 \end {bmatrix} \] And if \(n=6,M=3\) then this gives \(7^{4}=2401\) different sets \(s\)\[ \Lambda =\begin {bmatrix} -3 & -3 & -3 & -3\\ -3 & -3 & -3 & -2\\ -3 & -3 & -3 & -1\\ -3 & -3 & -3 & 0\\ -3 & -3 & -3 & +1\\ -3 & -3 & -3 & +2\\ -3 & -3 & -3 & +3\\ \vdots & \vdots & \vdots & \vdots \\ +3 & +3 & +3 & +3 \end {bmatrix} \] And if \(n=12,M=2\) then this gives \(13^{3}=2197\) different sets \(s\), with each entry ranging over the 13 values \(-6,-5,\cdots ,+6\)\[ \Lambda =\begin {bmatrix} -6 & -6 & -6\\ -6 & -6 & -5\\ -6 & -6 & -4\\ \vdots & \vdots & \vdots \\ -6 & -6 & +6\\ \vdots & \vdots & \vdots \\ +6 & +6 & +6 \end {bmatrix} \] And if \(n=12,M=3\) then this gives \(13^{4}=28561\) different sets \(s\)\[ \Lambda =\begin {bmatrix} -6 & -6 & -6 & -6\\ -6 & -6 & -6 & -5\\ -6 & -6 & -6 & -4\\ \vdots & \vdots & \vdots & \vdots \\ -6 & -6 & -6 & +6\\ \vdots & \vdots & \vdots & \vdots \\ +6 & +6 & +6 & +6 \end {bmatrix} \] And so on.
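Generating the rows of \(\Lambda \) is a plain Cartesian product. A sketch, assuming each entry ranges over the \(n+1\) values \(-\frac {n}{2},-\frac {n}{2}+1,\cdots ,+\frac {n}{2}\):

```python
# Sketch: generate the rows of Lambda for case number n and M+1 entries,
# giving (n+1)**(M+1) sets s in total.
from fractions import Fraction
from itertools import product

def s_sets(n, M):
    values = [Fraction(-n, 2) + k for k in range(n + 1)]  # -n/2, ..., +n/2
    return list(product(values, repeat=M + 1))

rows = s_sets(1, 2)   # case 1, M = 2: 2**3 = 8 rows of (+-1/2, +-1/2, +-1/2)
```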

Step 3. The input to this step is the non-negative integer \(d\) and the \(\Theta \) found from Eqs. (7,8) as described in step 2, and also \(r\), which comes from \(z^{\prime \prime }=rz\).

This step is broken into parts. First we form the polynomial \(p\left ( x\right ) \) and compute \(p_{-1}\left ( x\right ) \) from it. If we can solve \(p_{-1}\left ( x\right ) =0\) for the coefficients of \(p\left ( x\right ) \), then we build the minimal polynomial from the \(p_{i}\left ( x\right ) \) polynomials constructed along the way. The minimal polynomial \(p_{\min }\left ( x\right ) \) is a function of \(\omega \). Next we solve \(p_{\min }\left ( x\right ) =0\) for \(\omega \). If this is successful, then we have found \(\omega \) and the first solution of the ode \(z^{\prime \prime }=rz\) is \(e^{\int \omega dx}\). Below shows how this is done.

We start by forming a polynomial\[ p\left ( x\right ) =x^{d}+a_{d-1}x^{d-1}+\cdots +a_{0}\] Notice that \(x^{d}\) has coefficient \(1\). The goal is to solve for the coefficients \(a_{i}\). Now, depending on the case number \(n\), we do the following. If \(n=1\) then\begin {align} p_{1} & =-p\tag {9}\\ p_{0} & =-p_{1}^{\prime }-\Theta p_{1}\nonumber \\ p_{-1} & =-p_{0}^{\prime }-\Theta p_{0}-\left ( 1\right ) \left ( 1\right ) rp_{1}\nonumber \end {align}

And it is \(p_{-1}=0\) which is solved for the coeﬃcients \(a_{i}\). For the case \(n=2\) we ﬁnd \(p\) as follows\begin {align} p_{2} & =-p\tag {10}\\ p_{1} & =-p_{2}^{\prime }-\Theta p_{2}\nonumber \\ p_{0} & =-p_{1}^{\prime }-\Theta p_{1}-\left ( 1\right ) \left ( 2\right ) rp_{2}\nonumber \\ p_{-1} & =-p_{0}^{\prime }-\Theta p_{0}-\left ( 2\right ) \left ( 1\right ) rp_{1}\nonumber \end {align}

And it is \(p_{-1}=0\) which is solved for the coeﬃcients \(a_{i}\). For the case \(n=4\) we ﬁnd \(p\) as follows\begin {align} p_{4} & =-p\tag {11}\\ p_{3} & =-p_{4}^{\prime }-\Theta p_{4}\nonumber \\ p_{2} & =-p_{3}^{\prime }-\Theta p_{3}-\left ( 1\right ) \left ( 4\right ) rp_{4}\nonumber \\ p_{1} & =-p_{2}^{\prime }-\Theta p_{2}-\left ( 2\right ) \left ( 3\right ) rp_{3}\nonumber \\ p_{0} & =-p_{1}^{\prime }-\Theta p_{1}-\left ( 3\right ) \left ( 2\right ) rp_{2}\nonumber \\ p_{-1} & =-p_{0}^{\prime }-\Theta p_{0}-\left ( 4\right ) \left ( 1\right ) rp_{1}\nonumber \end {align}

And it is \(p_{-1}=0\) which is solved for the coeﬃcients \(a_{i}\). For the case \(n=6\) we ﬁnd \(p\) as follows\begin {align} p_{6} & =-p\tag {12}\\ p_{5} & =-p_{6}^{\prime }-\Theta p_{6}\nonumber \\ p_{4} & =-p_{5}^{\prime }-\Theta p_{5}-\left ( 1\right ) \left ( 6\right ) rp_{6}\nonumber \\ p_{3} & =-p_{4}^{\prime }-\Theta p_{4}-\left ( 2\right ) \left ( 5\right ) rp_{5}\nonumber \\ p_{2} & =-p_{3}^{\prime }-\Theta p_{3}-\left ( 3\right ) \left ( 4\right ) rp_{4}\nonumber \\ p_{1} & =-p_{2}^{\prime }-\Theta p_{2}-\left ( 4\right ) \left ( 3\right ) rp_{3}\nonumber \\ p_{0} & =-p_{1}^{\prime }-\Theta p_{1}-\left ( 5\right ) \left ( 2\right ) rp_{2}\nonumber \\ p_{-1} & =-p_{0}^{\prime }-\Theta p_{0}-\left ( 6\right ) \left ( 1\right ) rp_{1}\nonumber \end {align}

And it is \(p_{-1}=0\) which is solved for the coeﬃcients \(a_{i}\). For the case \(n=12\) we ﬁnd \(p\) as follows\begin {align} p_{12} & =-p\tag {13}\\ p_{11} & =-p_{12}^{\prime }-\Theta p_{12}\nonumber \\ p_{10} & =-p_{11}^{\prime }-\Theta p_{11}-\left ( 1\right ) \left ( 12\right ) rp_{12}\nonumber \\ p_{9} & =-p_{10}^{\prime }-\Theta p_{10}-\left ( 2\right ) \left ( 11\right ) rp_{11}\nonumber \\ p_{8} & =-p_{9}^{\prime }-\Theta p_{9}-\left ( 3\right ) \left ( 10\right ) rp_{10}\nonumber \\ p_{7} & =-p_{8}^{\prime }-\Theta p_{8}-\left ( 4\right ) \left ( 9\right ) rp_{9}\nonumber \\ p_{6} & =-p_{7}^{\prime }-\Theta p_{7}-\left ( 5\right ) \left ( 8\right ) rp_{8}\nonumber \\ p_{5} & =-p_{6}^{\prime }-\Theta p_{6}-\left ( 6\right ) \left ( 7\right ) rp_{7}\nonumber \\ p_{4} & =-p_{5}^{\prime }-\Theta p_{5}-\left ( 7\right ) \left ( 6\right ) rp_{6}\nonumber \\ p_{3} & =-p_{4}^{\prime }-\Theta p_{4}-\left ( 8\right ) \left ( 5\right ) rp_{5}\nonumber \\ p_{2} & =-p_{3}^{\prime }-\Theta p_{3}-\left ( 9\right ) \left ( 4\right ) rp_{4}\nonumber \\ p_{1} & =-p_{2}^{\prime }-\Theta p_{2}-\left ( 10\right ) \left ( 3\right ) rp_{3}\nonumber \\ p_{0} & =-p_{1}^{\prime }-\Theta p_{1}-\left ( 11\right ) \left ( 2\right ) rp_{2}\nonumber \\ p_{-1} & =-p_{0}^{\prime }-\Theta p_{0}-\left ( 12\right ) \left ( 1\right ) rp_{1}\nonumber \end {align}
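The recursions (9)-(13) all follow one pattern: \(p_{n}=-p\) and \(p_{i-1}=-p_{i}^{\prime }-\Theta p_{i}-\left ( n-i\right ) \left ( i+1\right ) rp_{i+1}\), with \(p_{n+1}=0\) starting the recursion (this closed form is my own reading of the five displayed cases, so treat it as an assumption). A sketch:

```python
# Sketch: build p_{-1} from p, Theta, r for case number n using the
# recursion p_{i-1} = -p_i' - Theta*p_i - (n-i)(i+1)*r*p_{i+1}.
import sympy as sp

x = sp.symbols('x')

def p_polys(p, Theta, r, n):
    P = {n: -p, n + 1: sp.Integer(0)}   # p_{n+1} = 0 starts the recursion
    for i in range(n, -1, -1):
        P[i - 1] = -sp.diff(P[i], x) - Theta*P[i] - (n - i)*(i + 1)*r*P[i + 1]
    return P   # P[-1] = 0 is the condition solved for the coefficients a_i

# Structural check for n = 1 with p = x, Theta = 0:
# p_1 = -x, p_0 = 1, p_{-1} = r*x, matching Eq. (9) term by term.
r = sp.Function('r')(x)
P = p_polys(x, 0, r, 1)
```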

If we are able to solve for all the \(a_{i}\) by solving \[ p_{-1}\left ( x\right ) =0 \] then we have determined \(p\left ( x\right ) \). This is used to find \(\omega \) as follows. For the case \(n=1\) \[ \omega =\frac {p^{\prime }}{p}+\Theta \] For the case \(n=2\), \(\omega \) is found by solving \[ \omega ^{2}-\phi \omega +\frac {\phi ^{\prime }}{2}+\frac {\phi ^{2}}{2}-r=0 \] Where \(\phi =\frac {p^{\prime }}{p}+\Theta \).

Equivalently, for the case \(n=2\) the minimal polynomial is\[ p_{\min }=\frac {1}{2!}p_{0}+p_{1}\omega +p_{2}\omega ^{2}\] Where \(p_{0},p_{1},p_{2}\) are from Eq (10) above. For the case \(n=4\)\[ p_{\min }=\frac {1}{4!}p_{0}+\frac {1}{3!}p_{1}\omega +\frac {1}{2!}p_{2}\omega ^{2}+p_{3}\omega ^{3}+p_{4}\omega ^{4}\] Where \(p_{0},p_{1},p_{2},p_{3},p_{4}\) are from Eq (11) above. For the case \(n=6\)\[ p_{\min }=\frac {1}{6!}p_{0}+\frac {1}{5!}p_{1}\omega +\frac {1}{4!}p_{2}\omega ^{2}+\frac {1}{3!}p_{3}\omega ^{3}+\frac {1}{2!}p_{4}\omega ^{4}+p_{5}\omega ^{5}+p_{6}\omega ^{6}\] Where \(p_{0},p_{1},p_{2},p_{3},p_{4},p_{5},p_{6}\) are from Eq (12) above. For the case \(n=12\)

\[ p_{\min }=\frac {1}{12!}p_{0}+\frac {1}{11!}p_{1}\omega +\frac {1}{10!}p_{2}\omega ^{2}+\frac {1}{9!}p_{3}\omega ^{3}+\frac {1}{8!}p_{4}\omega ^{4}+\frac {1}{7!}p_{5}\omega ^{5}+\frac {1}{6!}p_{6}\omega ^{6}+\frac {1}{5!}p_{7}\omega ^{7}+\frac {1}{4!}p_{8}\omega ^{8}+\frac {1}{3!}p_{9}\omega ^{9}+\frac {1}{2!}p_{10}\omega ^{10}+p_{11}\omega ^{11}+p_{12}\omega ^{12}\]
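All the displayed minimal polynomials follow the one pattern \(p_{\min }=\sum _{i=0}^{n}\frac {1}{\left ( n-i\right ) !}p_{i}\omega ^{i}\). A sketch that builds this sum symbolically (shown for \(n=4\), with placeholder symbols for the \(p_{i}\)):

```python
# Sketch: p_min = sum_{i=0}^{n} p_i * omega**i / (n-i)!, built symbolically.
import sympy as sp
from math import factorial

omega = sp.symbols('omega')

def p_min(P, n):
    """P maps i -> p_i(x); returns the minimal polynomial in omega."""
    return sum(P[i] * omega**i / factorial(n - i) for i in range(n + 1))

# For n = 4 this reproduces p_0/4! + p_1*omega/3! + p_2*omega^2/2! + ...
p = {i: sp.symbols(f'p{i}') for i in range(5)}
pm = p_min(p, 4)
```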

Where \(p_{i}\) for \(i=0\cdots 12\) are from Eq (13) above. In each case, we now solve\[ p_{\min }\left ( x\right ) =0 \] for \(\omega \). If this is successful, we then verify that the solution satisfies the Riccati ODE: in all cases \(\omega \) must satisfy\[ \omega ^{\prime }+\omega ^{2}=r \] If it does, then we have solved the ode \(z^{\prime \prime }=rz\) with \(z=e^{\int \omega dx}\), and hence also the original ode \(y^{\prime \prime }+ay^{\prime }+by=0\). This completes the Kovacic algorithm. Examples are given below showing how to apply the above to solve a number of ODEs.
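The final Riccati verification can be sketched as below, on the small made-up example \(r=-\frac {1}{4x^{2}}\), for which \(\omega =\frac {1}{2x}\) works and gives \(z=\sqrt {x}\):

```python
# Sketch of the final check: omega must satisfy omega' + omega^2 = r,
# and then z = exp(integral of omega) solves z'' = r z.
import sympy as sp

x = sp.symbols('x', positive=True)
r = -1/(4*x**2)     # made-up example, not from the text
w = 1/(2*x)         # candidate omega: w' + w^2 = -1/(2x^2) + 1/(4x^2) = r

riccati_ok = sp.simplify(sp.diff(w, x) + w**2 - r) == 0
z = sp.exp(sp.integrate(w, x))          # exp(log(x)/2) = sqrt(x)
ode_ok = sp.simplify(sp.diff(z, x, 2) - r*z) == 0
```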