Divide and Conquer Recurrence Relations
Suppose that a recursive algorithm divides a problem of size \(n\) into \(a\) subproblems, each of size \(\frac{n}{b}\). Suppose \(f(n)\) extra operations are needed in the conquer step. Then \(T(n)\), the number of operations required to solve a problem of size \(n\), satisfies the following recurrence relation:
where:
- \(T(n)\) represents the total time complexity for solving a problem of size \(n\).
- \(a\) represents the number of subproblems the problem is divided into.
- \(\frac{n}{b}\) represents the size of each subproblem.
- \(f(n)\) represents the cost of dividing the problem into subproblems and combining their solutions. This is the work done outside of the recursive calls.
The recursion \(T(n) = aT(\frac{n}{b}) + f(n)\) stops:
- When the recursion reaches the base case (usually \(n=1\)). The recursion halts at this point because the problem size can no longer be divided further.
- At the depth of the recursion tree, which corresponds to the number of times the problem size \(n\) can be divided by \(b\) until reaching the base case.
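As a concrete illustration, the following minimal sketch evaluates such a recurrence numerically. The constants \(a=2\), \(b=2\), the cost function \(f(n)=n\), and the base case \(T(1)=1\) are illustrative assumptions, not part of the theorem:

```python
from functools import lru_cache

# Minimal numeric sketch of T(n) = a*T(n/b) + f(n) with assumed T(1) = 1.
# a = 2, b = 2, f(n) = n gives the merge-sort-shaped recurrence.
A, B = 2, 2
def f(n): return n

@lru_cache(maxsize=None)
def T(n: int) -> float:
    if n <= 1:                 # base case: the recursion stops here
        return 1
    return A * T(n // B) + f(n)

for n in [2**k for k in range(1, 6)]:
    print(n, T(n))             # grows like Theta(n log n)
```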
The Asymptotic Growth of Geometric Series
Let \(r\) be a positive constant. Then,
That is, if the geometric progression is increasing, then the series grows as its last (and largest) term. If it is constant, then it grows as the number of terms. Finally, if it is decreasing, the sum is bounded by a constant.
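A quick numeric check of these three regimes (the sample values of \(r\) and \(n\) are arbitrary):

```python
# Compare S(n) = sum_{i=0}^{n} r^i against the claimed growth in each regime:
# r > 1 -> grows like r^n; r = 1 -> grows like n; r < 1 -> bounded by a constant.
def S(r: float, n: int) -> float:
    return sum(r**i for i in range(n + 1))

for r, scale in ((2.0, lambda n: 2.0**n), (1.0, lambda n: n), (0.5, lambda n: 1)):
    print(r, [round(S(r, n) / scale(n), 4) for n in (10, 20, 40)])
    # each row of ratios settles near a constant, confirming the regime
```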
The sum of the first \(n+1\) terms of a geometric sequence is given by the formula:
where the first term is \(1\), and the common ratio is \(r\).
Finding the lower bound or upper bound of a function involves identifying the dominant term because the dominant term determines the function's growth rate as the input size increases significantly.
Case \(r \gt 1\):
Each term in the geometric series grows exponentially, starting from \(r^{0}\) (i.e., \(1\)) and ending with \(r^{n}\). As the series progresses, the terms increase rapidly, with the last term dominating the others. As a result, the sum is always greater than \(r^{n}\) for \(n \gt 0\). This implies that the sum is bounded below by \(r^{n}\).
As \(n\) becomes large, the term \(r^{n+1}\) in the numerator of the geometric series formula dominates the constant \(1\), and the sum of the geometric series can be approximately expressed as \(\frac{r^{n+1}}{r - 1}\).
Thus, the upper bound of the geometric series sum is:
We have found both the lower bound and the upper bound for the sum of the geometric series. Thus, we have:
Therefore, the asymptotic behavior of \(S(n)\) is:
Case \(r = 1\):
For \(r=1\), the sum of the series is simply the number of terms. The sum grows linearly with \(n\), and we can express the sum as:
Therefore, the asymptotic behavior of \(S(n)\) is:
Case \(r \lt 1\):
Each term in the geometric series decays exponentially, starting from \(r^{0}\) (i.e., \(1\)) and ending with \(r^{n}\). As the series progresses, the terms decrease rapidly, with the first term dominating the others. As a result, the sum is always greater than \(1\) for \(n \gt 0\). This implies that the sum is bounded below by \(1\).
As \(n\) becomes large, the term \(r^{n+1}\) in the numerator of the geometric series formula approaches \(0\), and the sum of the geometric series can be approximately expressed as \(\frac{1}{1-r}\).
Thus, the upper bound of the geometric series sum is:
We have found both the lower bound and the upper bound for the sum of the geometric series. Thus, we have:
Therefore, the asymptotic behavior of \(S(n)\) is:
Simplified Version of Master Theorem
The simplified version of the master theorem is a formula for analyzing the asymptotic behavior of divide-and-conquer recurrences of the form:
where \(a \geq 1\) and \(b \geq 2\) are integer constants, and \(c \geq 0\) and \(k \geq 0\) are real constants; \(a\), \(b\), \(c\), and \(k\) do not depend on \(n\). The first term, \(a \cdot T(\frac{n}{b})\), represents the time required for the \(a\) recursive calls, each on an input of size \(\frac{n}{b}\); the second term, \(c \cdot n^{k}\), represents the time required to divide the input into \(a\) pieces of size \(\frac{n}{b}\) each and to combine the results of the recursive calls into the result of the main call.
Let \(a \geq 1\) and \(b \gt 1\) be integer constants. Suppose \(T(n)\) is defined for the positive reals to satisfy the recurrence
Then the growth of \(T(n)\) can be asymptotically determined under the following assumptions
Analogous results hold for the \(O\) and \(\Omega\) notations.
Consider the tree of recursive calls made by the algorithm:
- At level \(0\) (the root), we have one call for input of size \(n\). The time needed by this call, exclusive of the time needed by the calls it makes (i.e., the time to divide up its input and to combine the results of the calls it makes), is \(cn^{k}\) — the second term in the definition of the recurrence.
- At level \(1\) we have the calls made by the call at level \(0\): there are \(a\) such calls (one for each subproblem), each working on an input of size \(\frac{n}{b}\). The time needed by each of these calls, exclusive of the time needed by the calls it makes, is \(c\left(\frac{n}{b}\right)^{k}\), so the total time needed by these \(a\) calls is \(ac\left(\frac{n}{b}\right)^{k}\).
- At level \(2\) we have the calls made by the calls at level \(1\): there are \(a^{2}\) such calls, each working on an input of size \(\frac{n}{b^{2}}\). Reasoning as before, the total time needed by these \(a^{2}\) calls is \(a^{2}c\left(\frac{n}{b^{2}}\right)^{k}\).
- In general, at level \(i\), we have \(a^{i}\) calls, each working on an input of size \(\frac{n}{b^{i}}\). The total time needed for these calls is \(a^{i}c\left(\frac{n}{b^{i}}\right)^{k}\).
- This continues for every integer \(i = 0, 1, 2, \dots, \log_{b}(n)\). For \(i = \log_{b}(n)\), the input size is \(1\), and we have reached the base case of the recursion. At the base case, the problem is small enough that no further division is needed, and the solution can be computed directly.
Thus, the total time required for all the calls at all levels is:
Let
The series is a geometric series with ratio \(r=\frac{a}{b^{k}}\). Its growth rate depends on whether \(a\) is smaller than, equal to, or larger than \(b^{k}\). Consider these three cases.
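Before working through the derivation, here is a minimal sketch of the classification that the three cases below justify (the inputs are illustrative):

```python
import math

# Sketch of the simplified Master Theorem for T(n) = a*T(n/b) + c*n^k,
# mirroring the three cases derived below (compare a with b^k).
def master_simple(a: int, b: int, k: float) -> str:
    if a > b**k:
        return f"Theta(n^{math.log(a, b):.3f})"   # n^(log_b a)
    if a == b**k:
        return f"Theta(n^{k} * log n)"
    return f"Theta(n^{k})"

print(master_simple(2, 2, 1))  # a = b^k -> Theta(n^1 * log n), e.g. merge sort
print(master_simple(4, 2, 1))  # a > b^k -> Theta(n^2.000)
print(master_simple(1, 2, 1))  # a < b^k -> Theta(n^1)
```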
Case 1: \(a \gt b^{k}\). The condition \(a \gt b^{k}\) implies that the ratio \(r=\frac{a}{b^{k}} \gt 1\).
\(\sum_{i=0}^{\log_{b}(n)-1} r^{i}\) is a geometric series with the common ratio \(r=\frac{a}{b^{k}} \gt 1\) which grows exponentially, and can be bounded both below and above as follows:
Substituting into \(g(n)\) yields
Since \(g(n)\) is bounded both below and above by constant multiples of \(n^{\log_{b}(a)}\), we can conclude that
Substituting into \(T(n)\) yields
Thus, the asymptotic growth of \(T(n)\) is
Case 2: \(a = b^{k}\). The condition \(a = b^{k}\) implies that the ratio \(r=1\).
\(\sum_{i=0}^{\log_{b}(n)-1} 1\) is a sum of identical terms, so the sum is simply the constant value multiplied by the number of terms.
Substituting into \(g(n)\) yields
Since \(g(n)\) is bounded both below and above by constant multiples of \(n^{k}\log_{b}(n)\), we can conclude that
Substituting into \(T(n)\) yields
Thus, the asymptotic growth of \(T(n)\) is
Case 3: \(a \lt b^{k}\). The condition \(a \lt b^{k}\) implies that the ratio \(r=\frac{a}{b^{k}} \lt 1\).
\(\sum_{i=0}^{\log_{b}(n)-1} r^{i}\) is a geometric series with the common ratio \(r=\frac{a}{b^{k}} \lt 1\) whose terms decay exponentially, and can be bounded both below and above as follows:
Substituting into \(g(n)\) yields
Since \(g(n)\) is bounded both below and above by constant multiples of \(n^{k}\), we can conclude that
Substituting into \(T(n)\) yields
Thus, the asymptotic growth of \(T(n)\) is
Simplified Version of Master Theorem with Log Factors
A recurrence such as
does not exactly fit the form of the master theorem above, since the additive term \(n \log_{2}(n)\) does not look like \(n^{k}\) for some constant \(k\). Such a recurrence can be handled by a more general form of the theorem, as follows.
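Taking the canonical instance \(T(n) = 2T(\frac{n}{2}) + n\log_{2}(n)\) with an assumed base case \(T(1)=1\), a quick numeric sketch shows that \(T(n)\) tracks \(n\log_{2}^{2}(n)\), matching the result derived below:

```python
import math
from functools import lru_cache

# T(n) = 2*T(n/2) + n*log2(n) with assumed T(1) = 1; the theorem below
# gives Theta(n log^2 n), so T(n)/(n log^2 n) should level off.
@lru_cache(maxsize=None)
def T(n: int) -> float:
    return 1 if n <= 1 else 2 * T(n // 2) + n * math.log2(n)

for n in [2**k for k in (8, 12, 16, 20)]:
    print(n, round(T(n) / (n * math.log2(n) ** 2), 4))   # settles near 0.5
```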
Let \(a \geq 1\) and \(b \gt 1\) be integer constants. Suppose \(T(n)\) is defined for the positive reals to satisfy the recurrence
Then the growth of \(T(n)\) can be asymptotically determined under the following assumptions
Analogous results hold for the \(O\) and \(\Omega\) notations.
Consider the tree of recursive calls made by the recurrence:
- At level \(0\) (the root), there is one call with an input size of \(n\). The time required for this call, excluding the time needed by the recursive calls it spawns (i.e., the time to divide up its input and to combine the results of the calls it makes), is \(cn^{k} \log_{b}^{p}(n)\).
- At level \(1\), the single call at level \(0\) spawns \(a\) recursive calls, each working on a subproblem of size \(\frac{n}{b}\). Each of these calls requires \(c\left(\frac{n}{b}\right)^{k} \log_{b}^{p}\left(\frac{n}{b}\right)\) time, excluding their own recursive calls. The total time required at this level is therefore \(ac\left(\frac{n}{b}\right)^{k} \log_{b}^{p}\left(\frac{n}{b}\right)\).
- At level \(2\), the \(a\) calls from level \(1\) each spawn \(a\) new calls, resulting in \(a^{2}\) calls at this level. Each of these calls works on an input of size \(\frac{n}{b^{2}}\). The total time required at this level is \(a^{2}c\left(\frac{n}{b^{2}}\right)^{k} \log_{b}^{p}\left(\frac{n}{b^{2}}\right)\).
- In general, at level \(i\), there are \(a^{i}\) calls, each working on an input of size \(\frac{n}{b^{i}}\). The total time required for all calls at this level is \(a^{i}c\left(\frac{n}{b^{i}}\right)^{k} \log_{b}^{p}\left(\frac{n}{b^{i}}\right)\).
- This process continues until \(i = \log_{b}(n)\), where the input size reduces to \(1\). At this base case, the problem is small enough to be solved directly without further recursive calls.
Thus, the total time required for all the calls at all levels is:
Let
The series \(\sum_{i=0}^{\log_{b}(n)-1} \left(\frac{a}{b^{k}}\right)^{i} \left(\log_{b}(n)-i\right)^{p}\) can be simplified by reindexing the terms to make the summation easier to analyze. Define \(j=\log_{b}(n)-i\), which implies \(i=\log_{b}(n)-j\). When \(i=0\), \(j=\log_{b}(n)\), and when \(i=\log_{b}(n)-1\), \(j=1\). Rewriting the summation in terms of \(j\), we have:
Substituting into \(g(n)\) yields
The series is classified as a polynomial-geometric series because the terms involve both a geometric factor \(\left(\frac{b^{k}}{a}\right)^{j}\) and a polynomial factor \(j^{p}\). Its growth rate depends on whether \(a\) is smaller than, equal to, or larger than \(b^{k}\). Consider these cases.
Case 1: \(a \gt b^{k}\). The condition \(a \gt b^{k}\) implies that the ratio \(r=\frac{b^{k}}{a} \lt 1\).
Since \(\sum_{j=1}^{\log_{b}(n)} r^{j} j^{p}\) is a partial sum of \(\sum_{j=1}^{\infty}r^{j} j^{p}\), which converges to a constant, it follows that
To prove that the infinite series \(\sum_{j=1}^{\infty} r^{j} j^{p}\) for \(r \lt 1\) converges to a constant, we use techniques from calculus, especially generating functions and differentiation.
We begin with the basic geometric series:
The series converges to a constant for \(|r| \lt 1\).
To find \(\sum_{j=1}^{\infty} jr^{j}\), we differentiate \(\sum_{j=1}^{\infty} r^{j}\) with respect to \(r\):
Now, multiply both sides by \(r\)
This confirms that for \(p=1\), the series \(\sum_{j=1}^{\infty} jr^{j}\) converges to a constant, and it does not depend on \(n\).
To find \(\sum_{j=1}^{\infty} j^{2}r^{j}\), we differentiate \(\sum_{j=1}^{\infty} jr^{j}\) with respect to \(r\):
Now, multiply both sides by \(r\)
This confirms that for \(p=2\), the series \(\sum_{j=1}^{\infty} j^{2}r^{j}\) converges to a constant, and it does not depend on \(n\).
For a general \(p\), we can continue this differentiation process \(p\) times.
Therefore, by continuing in this way, the series \(\sum_{j=1}^{\infty} j^{p}r^{j}\) converges to a constant, and it does not depend on \(n\).
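As a numeric illustration of this convergence (the choices \(r=0.5\) and \(p \in \{1,2,3\}\) are arbitrary):

```python
# Partial sums of sum_{j>=1} j^p * r^j for r < 1 level off at a constant.
def partial_sum(r: float, p: int, terms: int) -> float:
    return sum((j**p) * (r**j) for j in range(1, terms + 1))

r = 0.5
for p in (1, 2, 3):
    print(p, [round(partial_sum(r, p, t), 6) for t in (10, 20, 40)])
# For p = 1 the limit is r/(1-r)^2 = 2.0; each row converges to its own constant.
```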
Since all terms in the series \(\sum_{j=1}^{\log_{b}(n)} r^{j} j^{p}\) are positive, the sum is at least its first term, which therefore serves as a lower bound.
Thus, \(\sum_{j=1}^{\log_{b}(n)} r^{j} j^{p}\) can be bounded both below and above as follows:
Substituting into \(g(n)\), where \(r=\frac{b^{k}}{a}\), yields
Since \(g(n)\) is bounded both above and below by constant multiples of \(n^{\log_{b}(a)}\), we can conclude that
Substituting into \(T(n)\) yields
Thus, the asymptotic growth of \(T(n)\) is
Case 2: \(a = b^{k}\) and \(p \gt -1\). The condition \(a = b^{k}\) implies that the ratio \(r=\frac{b^{k}}{a}=1\). Substituting into \(g(n)\) yields
\(\sum_{j=1}^{\log_{b}(n)} j^{p}\) is a sum of powers of integers, and can be bounded both below and above using integral approximation:
Substituting into \(g(n)\) yields
To factor out \(\log_{b}^{p+1}(n)\) from \(\left(\log_{b}(n)+1\right)^{p+1}\), we use the binomial expansion.
Substituting into \(g(n)\) yields
Since \(g(n)\) is bounded both above and below by constant multiples of \(n^{\log_{b}(a)}\log_{b}^{p+1}(n)\), we can conclude that
Substituting into \(T(n)\) yields
Thus, the asymptotic growth of \(T(n)\) is
Case 3: \(a = b^{k}\) and \(p = -1\). The condition \(a = b^{k}\) implies that the ratio \(r=\frac{b^{k}}{a}=1\). Substituting into \(g(n)\) yields
The expression \(\sum_{j=1}^{\log_{b}(n)} \frac{1}{j}\) represents a partial sum of the harmonic series.
The harmonic numbers, \(H_{n}=\sum_{j=1}^{n} \frac{1}{j}\), can be bounded both below and above as follows:
where \(H_{n}\) is the \(n\)-th harmonic number.
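A quick numeric check of these bounds (the sample values of \(m\) are arbitrary):

```python
import math

# Check the harmonic-number bounds ln(m) < H_m <= ln(m) + 1 numerically.
for m in (10, 100, 1000):
    H = sum(1 / j for j in range(1, m + 1))
    print(m, round(math.log(m), 4), round(H, 4), round(math.log(m) + 1, 4))
    # H_m always lies between ln(m) and ln(m) + 1
```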
As a result, the partial sum \(\sum_{j=1}^{\log_{b}(n)} \frac{1}{j}\) satisfies the inequality:
Substituting into \(g(n)\) yields
Since \(g(n)\) is bounded both above and below by constant multiples of \(n^{\log_{b}(a)}\log_{b}(\log_{b}(n))\), we can conclude that
Substituting into \(T(n)\) yields
Thus, the asymptotic growth of \(T(n)\) is
Case 4: \(a = b^{k}\) and \(p \lt -1\). The condition \(a = b^{k}\) implies that the ratio \(r=\frac{b^{k}}{a}=1\). Substituting into \(g(n)\) yields
Let \(p=-s\), which simplifies \(g(n)\) to
The series \(\sum_{j=1}^{\log_{b}(n)} \frac{1}{j^{s}}\) is a partial sum of a \(p\)-series.
For \(s \gt 1\), the series \(\sum_{j=1}^{n} \frac{1}{j^{s}}\) converges and can be bounded above and below using integral approximation:
Substituting into \(g(n)\) yields
Since \(g(n)\) is bounded both above and below by constant multiples of \(n^{\log_{b}(a)}\), we can conclude that
Substituting into \(T(n)\) yields
Thus, the asymptotic growth of \(T(n)\) is
Case 5: \(a \lt b^{k}\) and \(p \gt 0\). The condition \(a \lt b^{k}\) implies that the ratio \(r=\frac{b^{k}}{a} \gt 1\).
To find the sum \(\sum_{j=1}^{\log_{b}(n)} r^{j} j^{p}\) for \(r \gt 1\) and large \(n\), we will use techniques from calculus, especially the concept of generating functions and differentiation.
We begin with the basic geometric series:
For large \(t\), the sum of the geometric series can be approximated as follows
To find \(\sum_{j=1}^{t} jr^{j}\), we differentiate \(\sum_{j=1}^{t} r^{j}\) with respect to \(r\):
Now, multiply both sides by \(r\)
To find \(\sum_{j=1}^{t} j^{2}r^{j}\), we differentiate \(\sum_{j=1}^{t} jr^{j}\) with respect to \(r\):
Now, multiply both sides by \(r\)
To find \(\sum_{j=1}^{t} j^{3}r^{j}\), we differentiate \(\sum_{j=1}^{t} j^{2}r^{j}\) with respect to \(r\):
Now, multiply both sides by \(r\)
For a general \(p\), we can continue this differentiation process \(p\) times.
where \(L(r)\) is a rational function.
The function \(L(r)\) has the form
where N(r) is a polynomial function.
Therefore, let \(t=\log_{b}(n)\), which gives the upper bound:
Since all terms in the sum \(\sum_{j=1}^{\log_{b}(n)} r^{j} j^{p}\) are positive, the sum is at least its last (and largest) term, which therefore serves as a lower bound.
Thus, \(\sum_{j=1}^{\log_{b}(n)} r^{j} j^{p}\) can be bounded both below and above as follows:
Substituting into \(g(n)\), where \(r=\frac{b^{k}}{a}\), yields
Since \(g(n)\) is bounded both above and below by constant multiples of \(n^{k}\log_{b}^{p}(n)\), we can conclude that
Substituting into \(T(n)\) yields
Thus, the asymptotic growth of \(T(n)\) is
Case 6: \(a \lt b^{k}\) and \(p \lt 0\). The condition \(a \lt b^{k}\) implies that the ratio \(r=\frac{b^{k}}{a} \gt 1\).
Let \(s=-p\). Substituting into \(g(n)\) yields
Since \(\frac{r^{j}}{j^{s}} \leq r^{j}\), the series can be bounded above by a geometric series.
Since all terms in the series \(\sum_{j=1}^{\log_{b}(n)} \frac{r^{j}}{j^{s}}\) are positive, the sum is at least its last term, which therefore serves as a lower bound.
Thus, \(\sum_{j=1}^{\log_{b}(n)} \frac{r^{j}}{j^{s}}\) can be bounded both below and above as follows:
Substituting into \(g(n)\), where \(r=\frac{b^{k}}{a}\), yields
Since \(g(n)\) is bounded both above and below by constant multiples of \(n^{k}\), we can conclude that
Substituting into \(T(n)\) yields
Thus, the asymptotic growth of \(T(n)\) is
General Version of Master Theorem
The general version of the master theorem is a formula for analyzing the time complexity of divide-and-conquer recurrences of the form:
where \(a \geq 1\) and \(b \geq 2\), with \(f\) asymptotically positive (asymptotically positive means that the function is positive for all sufficiently large \(n\)).
This recurrence describes an algorithm that divides a problem of size \(n\) into \(a\) subproblems, each of size \(\frac{n}{b}\), and solves them recursively. Note that \(\frac{n}{b}\) might not be an integer, but replacing \(T(\frac{n}{b})\) with \(T(\lceil\frac{n}{b}\rceil)\) or \(T(\lfloor\frac{n}{b}\rfloor)\) does not affect the asymptotic behavior of the recurrence, so we will just ignore floors and ceilings.
The master theorem compares the function \(n^{\log_{b}(a)}\) to the function \(f(n)\). Intuitively, if \(n^{\log_{b}(a)}\) is larger (by a polynomial factor), then the solution is \(T(n) = \Theta(n^{\log_{b}(a)})\). If \(f(n)\) is larger (by a polynomial factor), then the solution is \(T(n) = \Theta(f(n))\). If they are the same size, then we multiply by a logarithmic factor.
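The following sketch makes this comparison concrete. It is a heuristic, not a proof: it samples \(f\) at one large point with an assumed \(\epsilon\), and the inputs are illustrative:

```python
import math

# Heuristic classification for T(n) = a*T(n/b) + f(n): compare f(n) with
# n^(log_b a +/- eps) at a large sample point. Not a proof, just a guide.
def classify(a: float, b: float, f, n: float = 2.0**40, eps: float = 0.1) -> str:
    p = math.log(a, b)                      # the critical exponent log_b(a)
    if f(n) <= n**(p - eps):
        return f"Case 1: Theta(n^{p:.3f})"
    if f(n) >= n**(p + eps):
        return "Case 3: Theta(f(n)), if the regularity condition also holds"
    return f"Case 2: Theta(n^{p:.3f} * log n)"

print(classify(4, 2, lambda n: n))          # Case 1: Theta(n^2.000)
print(classify(2, 2, lambda n: n))          # Case 2: Theta(n^1.000 * log n)
print(classify(3, 2, lambda n: n * n))      # Case 3: Theta(f(n)) = Theta(n^2)
```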
Let \(a \geq 1\) and \(b \gt 1\) be integer constants, and let \(f(n)\) be an asymptotically positive function. Suppose \(T(n)\) is defined for the positive reals to satisfy the recurrence
Then the growth of \(T(n)\) can be asymptotically determined under the following assumptions
Consider the tree of recursive calls made by the algorithm:
- At level \(0\) (the root), there is one call with an input size of \(n\). The time required for this call, excluding the time needed by the recursive calls it spawns (i.e., the time to divide up its input and to combine the results of the calls it makes), is \(f(n)\).
- At level \(1\), the single call at level \(0\) spawns \(a\) recursive calls, each working on a subproblem of size \(\frac{n}{b}\). Each of these calls requires \(f\left(\frac{n}{b}\right)\) time, excluding their own recursive calls. The total time required at this level is therefore \(af\left(\frac{n}{b}\right)\).
- At level \(2\), the \(a\) calls from level \(1\) each spawn \(a\) new calls, resulting in \(a^{2}\) calls at this level. Each of these calls works on an input of size \(\frac{n}{b^{2}}\). The total time required at this level is \(a^{2}f\left(\frac{n}{b^{2}}\right)\).
- In general, at level \(i\), there are \(a^{i}\) calls, each working on an input of size \(\frac{n}{b^{i}}\). The total time required for all calls at this level is \(a^{i}f\left(\frac{n}{b^{i}}\right)\).
- This process continues until \(i = \lfloor\log_{b}(n)\rfloor\), where the input size reduces to \(1\). At this base case, the problem is small enough to be solved directly without further recursive calls.
Thus, the total time required for all the calls at all levels is:
Let
Case 1: \(f(n) = O\left(n^{\log_{b}(a)-\epsilon}\right)\). Since \(f(n) = O\left(n^{\log_{b}(a)-\epsilon}\right)\), it follows that \(f\left(\frac{n}{b^{i}}\right) = O\left(\left(\frac{n}{b^{i}}\right)^{\log_{b}(a)-\epsilon}\right)\). Substituting into \(g(n)\) yields
Using the meaning of the \(O\)-notation, there exist constants \(c \gt 0\) and \(n_{0} \gt 0\) such that for all \(n \geq n_{0}\):
\(\sum_{i=0}^{\log_{b}(n)-1} \left(b^{\epsilon}\right)^{i}\) is a geometric series with the common ratio \(b^{\epsilon}\) which grows exponentially, and can be bounded above as follows:
Substituting into \(g(n)\) yields
Since \(g(n)\) is bounded above by a constant multiple of \(n^{\log_{b}(a)}\), we can conclude that
Substituting into \(T(n)\) yields
Thus, the asymptotic growth of \(T(n)\) is
Case 2: \(f(n) = \Theta\left(n^{\log_{b}(a)}\right)\). Since \(f(n) = \Theta\left(n^{\log_{b}(a)}\right)\), it follows that \(f\left(\frac{n}{b^{i}}\right) = \Theta\left(\left(\frac{n}{b^{i}}\right)^{\log_{b}(a)}\right)\). Substituting into \(g(n)\) yields
Using the meaning of the \(\Theta\)-notation, there exist constants \(c_{1} \gt 0\), \(c_{2} \gt 0\) and \(n_{0} \gt 0\) such that for all \(n \geq n_{0}\):
Since \(g(n)\) is bounded both above and below by constant multiples of \(n^{\log_{b}(a)}\log_{b}(n)\), we can conclude that
Substituting into \(T(n)\) yields
Thus, the asymptotic growth of \(T(n)\) is
Case 3: \(f(n) = \Omega\left(n^{\log_{b}(a)+\epsilon}\right)\). Since \(f(n) = \Omega\left(n^{\log_{b}(a)+\epsilon}\right)\), it follows that \(f\left(\frac{n}{b^{i}}\right) = \Omega\left(\left(\frac{n}{b^{i}}\right)^{\log_{b}(a)+\epsilon}\right)\). Substituting into \(g(n)\) yields
Using the meaning of the \(\Omega\)-notation, there exist constants \(c \gt 0\) and \(n_{0} \gt 0\) such that for all \(n \geq n_{0}\):
\(\sum_{i=0}^{\lfloor\log_{b}(n)\rfloor-1} \left(\frac{1}{b^{\epsilon}}\right)^{i}\) is a geometric series with the common ratio \(\frac{1}{b^{\epsilon}}\) which decays exponentially, and can be bounded below as follows:
Substituting into \(g(n)\) yields
Since \(g(n)\) is bounded below by a constant multiple of \(n^{\log_{b}(a)+\epsilon}\), we can conclude that
Case 3 also assumes the regularity condition: \(af\left(\frac{n}{b}\right) \leq cf(n)\) for some constant \(c \lt 1\) and all sufficiently large \(n\). Since \(af\left(\frac{n}{b}\right) \leq cf(n)\), it follows that \(f\left(\frac{n}{b}\right) \leq \frac{c}{a}f(n)\).
Now, iterate the recurrence
Iterate the recurrence once more
By continuing this process, after \(i\) iterations, we have:
Thus, we have:
Substituting into \(g(n)\) yields
\(\sum_{i=0}^{\lfloor\log_{b}(n)\rfloor-1} c^{i}\) is a geometric series with the common ratio \(c \lt 1\), which decays exponentially and can be bounded above as follows:
Substituting into \(g(n)\) yields
Since \(g(n)\) is bounded above by a constant multiple of \(f(n)\), we can conclude that
We have \(g(n) = \Omega\left(f(n)\right)\) and \(g(n) = O\left(f(n)\right)\), which implies that
Substituting into \(T(n)\) yields
Thus, the asymptotic growth of \(T(n)\) is
In the Master Theorem, as given above, there is a gap between cases 1 and 2, and a gap between cases 2 and 3.
The recurrences
and
do not exactly fit the form of the master theorem above.
The Master Theorem works well when \(f(n)\) is a simple polynomial, like \(n^{d}\); here, however, \(f(n)=n\log_{2}(n)\) involves both a polynomial part and a logarithmic factor, which is more complicated.
To handle recurrences like this, we need to use an extension of case 2 of the Master Theorem, known as the Generalized Master Theorem or Extended Master Theorem, which can handle additional logarithmic factors.
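As a compact preview, a sketch of the Case 2 classification that the following derivation justifies (the recurrences in the comments are illustrative instances):

```python
import math

# Extended Case 2: f(n) = Theta(n^(log_b a) * log^k n). The sub-case depends
# only on how k compares with -1, as derived below.
def extended_case2(a: float, b: float, k: float) -> str:
    p = math.log(a, b)
    if k > -1:
        return f"Theta(n^{p:.3f} * log^{k + 1:g} n)"
    if k == -1:
        return f"Theta(n^{p:.3f} * log log n)"
    return f"Theta(n^{p:.3f})"              # k < -1

print(extended_case2(2, 2, 1))   # T(n) = 2T(n/2) + n log n    -> Theta(n log^2 n)
print(extended_case2(2, 2, -1))  # T(n) = 2T(n/2) + n / log n  -> Theta(n log log n)
print(extended_case2(2, 2, -2))  # T(n) = 2T(n/2) + n / log^2 n -> Theta(n)
```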
Let \(a \geq 1\) and \(b \gt 1\) be integer constants, and let \(f(n)\) be an asymptotically positive function. Suppose \(T(n)\) is defined for the positive reals to satisfy the recurrence
Then the growth of \(T(n)\) can be asymptotically determined under the following assumptions
Consider the tree of recursive calls made by the algorithm:
- At level \(0\) (the root), there is one call with an input size of \(n\). The time required for this call, excluding the time needed by the recursive calls it spawns (i.e., the time to divide up its input and to combine the results of the calls it makes), is \(f(n)\).
- At level \(1\), the single call at level \(0\) spawns \(a\) recursive calls, each working on a subproblem of size \(\frac{n}{b}\). Each of these calls requires \(f\left(\frac{n}{b}\right)\) time, excluding their own recursive calls. The total time required at this level is therefore \(af\left(\frac{n}{b}\right)\).
- At level \(2\), the \(a\) calls from level \(1\) each spawn \(a\) new calls, resulting in \(a^{2}\) calls at this level. Each of these calls works on an input of size \(\frac{n}{b^{2}}\). The total time required at this level is \(a^{2}f\left(\frac{n}{b^{2}}\right)\).
- In general, at level \(i\), there are \(a^{i}\) calls, each working on an input of size \(\frac{n}{b^{i}}\). The total time required for all calls at this level is \(a^{i}f\left(\frac{n}{b^{i}}\right)\).
- This process continues until \(i = \lfloor\log_{b}(n)\rfloor\), where the input size reduces to \(1\). At this base case, the problem is small enough to be solved directly without further recursive calls.
Thus, the total time required for all the calls at all levels is:
Let
Case 2a: \(f(n) = \Theta\left(n^{\log_{b}(a)} \log_{b}^{k}(n)\right)\) and \(k \gt -1\). Since \(f(n) = \Theta\left(n^{\log_{b}(a)} \log_{b}^{k}(n)\right)\), it follows that \(f\left(\frac{n}{b^{i}}\right) = \Theta\left(\left(\frac{n}{b^{i}}\right)^{\log_{b}(a)} \log_{b}^{k}\left(\frac{n}{b^{i}}\right)\right)\). Substituting into \(g(n)\) yields
Using the meaning of the \(\Theta\)-notation, there exist constants \(c_{1} \gt 0\), \(c_{2} \gt 0\) and \(n_{0} \gt 0\) such that for all \(n \geq n_{0}\):
The expression \(\sum_{i=0}^{\log_{b}(n)-1} \left(\log_{b}(n)-i\right)^{k}\) can be simplified by reindexing the terms to make the summation easier to analyze. Define \(j=\log_{b}(n)-i\), which implies \(i=\log_{b}(n)-j\). When \(i=0\), \(j=\log_{b}(n)\), and when \(i=\log_{b}(n)-1\), \(j=1\). Rewriting the summation in terms of \(j\), we have:
Substituting into \(g(n)\) yields
\(\sum_{j=1}^{\log_{b}(n)} j^{k}\) is a sum of powers of integers, and can be bounded both below and above using integral approximation:
Substituting into \(g(n)\) yields
To factor out \(\log_{b}^{k+1}(n)\) from \(\left(\log_{b}(n)+1\right)^{k+1}\), we use the binomial expansion.
Substituting into \(g(n)\) yields
Since \(g(n)\) is bounded both above and below by constant multiples of \(n^{\log_{b}(a)}\log_{b}^{k+1}(n)\), we can conclude that
Substituting into \(T(n)\) yields
Thus, the asymptotic growth of \(T(n)\) is
Case 2b: \(f(n) = \Theta\left(n^{\log_{b}(a)} \log_{b}^{k}(n)\right)\) and \(k=-1\). Since \(f(n) = \Theta\left(n^{\log_{b}(a)} \log_{b}^{k}(n)\right)\), it follows that \(f\left(\frac{n}{b^{i}}\right) = \Theta\left(\left(\frac{n}{b^{i}}\right)^{\log_{b}(a)} \log_{b}^{k}\left(\frac{n}{b^{i}}\right)\right)\). Substituting into \(g(n)\) yields
Using the meaning of the \(\Theta\)-notation, there exist constants \(c_{1} \gt 0\), \(c_{2} \gt 0\) and \(n_{0} \gt 0\) such that for all \(n \geq n_{0}\):
The series \(\sum_{i=0}^{\log_{b}(n)-1} \frac{1}{\log_{b}(n)-i}\) can be simplified by reindexing the terms to make the summation easier to analyze. Define \(j=\log_{b}(n)-i\), which implies \(i=\log_{b}(n)-j\). When \(i=0\), \(j=\log_{b}(n)\), and when \(i=\log_{b}(n)-1\), \(j=1\). Rewriting the summation in terms of \(j\), we have:
Substituting into \(g(n)\) yields
The series \(\sum_{j=1}^{\log_{b}(n)} \frac{1}{j}\) is a partial sum of the harmonic series.
The harmonic series, represented by \(H_{n}=\sum_{j=1}^{n} \frac{1}{j}\), can be bounded as follows:
where \(H_{n}\) is the \(n\)-th harmonic number.
As a result, the partial sum of \(\sum_{j=1}^{\log_{b}(n)} \frac{1}{j}\) satisfies the inequality:
Substituting into \(g(n)\) yields
Since \(g(n)\) is bounded both above and below by constant multiples of \(n^{\log_{b}(a)}\log_{b}(\log_{b}(n))\), we can conclude that
Substituting into \(T(n)\) yields
Thus, the asymptotic growth of \(T(n)\) is
Case 2c: \(f(n) = \Theta\left(n^{\log_{b}(a)} \log_{b}^{k}(n)\right)\) and \(k \lt -1\). Since \(f(n) = \Theta\left(n^{\log_{b}(a)} \log_{b}^{k}(n)\right)\), it follows that \(f\left(\frac{n}{b^{i}}\right) = \Theta\left(\left(\frac{n}{b^{i}}\right)^{\log_{b}(a)} \log_{b}^{k}\left(\frac{n}{b^{i}}\right)\right)\). Substituting into \(g(n)\) yields
Using the meaning of the \(\Theta\)-notation, there exist constants \(c_{1} \gt 0\), \(c_{2} \gt 0\) and \(n_{0} \gt 0\) such that for all \(n \geq n_{0}\):
Let \(k=-r\), where \(r \gt 1\), which simplifies \(g(n)\) to
The expression \(\sum_{i=0}^{\log_{b}(n)-1} \frac{1}{\left(\log_{b}(n)-i\right)^{r}}\) can be simplified by reindexing the terms. Define \(j=\log_{b}(n)-i\), which implies \(i=\log_{b}(n)-j\). When \(i=0\), \(j=\log_{b}(n)\), and when \(i=\log_{b}(n)-1\), \(j=1\). Rewriting the summation in terms of \(j\), we have:
The series \(\sum_{j=1}^{\log_{b}(n)} \frac{1}{j^{r}}\) is a partial sum of a \(p\)-series.
For \(r \gt 1\), the \(p\)-series \(\sum_{j=1}^{n} \frac{1}{j^{r}}\) converges and can be bounded above and below using integral approximation:
As a result, the partial sum \(\sum_{j=1}^{\log_{b}(n)} \frac{1}{j^{r}}\) satisfies the inequality:
Substituting into \(g(n)\) yields
Since \(g(n)\) is bounded both above and below by constant multiples of \(n^{\log_{b}(a)}\), we can conclude that
Substituting into \(T(n)\) yields
Thus, the asymptotic growth of \(T(n)\) is
In the preceding analysis we assumed that the input size \(n\) is a positive integer power of \(b\), and therefore that \(b\) is also an integer. It turns out that, essentially, the Master Theorem holds even if \(n\) is not a power of \(b\) and \(b \gt 1\) is a real number, not necessarily an integer. If \(n\) is not a power of \(b\), however, the previous recurrence expression is not a legitimate recurrence: sooner or later, as we keep dividing the input size by \(b\), we end up with a non-integer, and then the recurrence is undefined.
In general, a divide-and-conquer algorithm breaks a problem of size \(n\) into \(a\) subproblems, \(a_{1}\) of which have size \(\lceil \frac{n}{b} \rceil\) and \(a_{2}\) of which have size \(\lfloor \frac{n}{b} \rfloor\), for some non-negative integers \(a_{1}\) and \(a_{2}\) such that \(a_{1} + a_{2} = a\). Thus, the following recurrence describes the running time of such an algorithm when \(n\) is not necessarily a power of \(b\) and \(b \gt 1\) is not necessarily an integer.
where \(a_{1}\), \(a_{2}\) are non-negative integers such that \(a_{1} + a_{2} = a\).
or
or
We need to extend our analysis to allow situations in which floors and ceilings appear in the master recurrence.
Let \(a_{1} + a_{2} = a \geq 1\) and \(b \gt 1\) be real constants, and let \(f(n)\) be an asymptotically positive function. Suppose \(T(n)\) is defined for the positive reals to satisfy the recurrence
Then the growth of \(T(n)\) can be asymptotically determined under the following assumptions
The monotonicity property allows us to apply techniques like induction to prove bounds for \(T(n)\).
The running time \(T(a)\) represents the worst-case running time for a problem of size \(a\) and \(T(b)\) represents the worst-case running time for a problem of size \(b\).
Every input of size \(a\) can be seen as a part (or subset) of the inputs of size \(b\), because if \(a \leq b\), any instance of the problem that can be solved for \(a\)-sized inputs can also be considered a part of the problem for \(b\)-sized inputs.
For example, if a problem involves sorting an array, the set of possible arrays of size \(a\) is a subset of the set of possible arrays of size \(b\) when \(a \lt b\).
Therefore, the worst-case running time for the smaller problem (size \(a\)) cannot exceed the worst-case running time for the larger problem (size \(b\)); that is, \(T(a) \leq T(b)\) whenever \(a \leq b\).
Since \(\lfloor\frac{n}{b}\rfloor \leq \lceil\frac{n}{b}\rceil \lt \frac{n}{b} + 1\), we replace \(\lfloor\frac{n}{b}\rfloor\) with \(\lceil\frac{n}{b}\rceil\) in the recurrence relation because \(\lceil\frac{n}{b}\rceil\) is larger, ensuring an upper bound on the running time. Furthermore, the size of \(\lceil\frac{n}{b}\rceil\) can be bounded by \(\frac{n}{b} + 1\). This simplification helps establish a tighter upper bound for analyzing the recurrence.
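As a small numeric sketch supporting this claim (the recurrence \(T(n)=2T(\lceil\frac{n}{2}\rceil)+n\) and the base case \(T(1)=1\) are illustrative assumptions), the exact ceiling version still tracks \(n\log_{2}(n)\) even when \(n\) is not a power of \(2\):

```python
import math
from functools import lru_cache

# With ceilings, T(n) = 2*T(ceil(n/2)) + n (assumed T(1) = 1) still grows
# like n*log2(n), even for inputs that are not powers of 2.
@lru_cache(maxsize=None)
def T(n: int) -> float:
    return 1 if n <= 1 else 2 * T(math.ceil(n / 2)) + n

for n in (1000, 10_000, 100_000, 1_000_000):
    print(n, round(T(n) / (n * math.log2(n)), 4))   # ratio stays roughly constant
```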
To simplify the recurrence and analyze it more effectively, we apply the change-of-function trick by defining a new function:
This simplifies the recurrence:
Summarizing the above inequalities we have
Applying this inequality repeatedly, and using the geometric series formulas as in the proof of previous theorem we get that
By definition of \(S\), \( T(n) = S(n-l) \). Therefore
Let \(a \geq 1\) and \(b \gt 1\) be real constants, and let \(f(n)\) be an asymptotically positive function. Suppose \(T(n)\) is defined for the positive reals to satisfy the recurrence
Then the growth of \(T(n)\) can be asymptotically determined under the following assumptions
The monotonicity property allows us to apply techniques like induction to prove bounds for \(T(n)\).
Since \(\lceil\frac{n}{b}\rceil \lt \frac{n}{b} + 1\), the size of \(\lceil\frac{n}{b}\rceil\) can be bounded by \(\frac{n}{b} + 1\). This simplification helps establish a tighter upper bound for analyzing the recurrence.
To simplify the recurrence and analyze it more effectively, we apply the change-of-function trick by defining a new function:
This simplifies the recurrence:
Summarizing the above inequalities we have
Applying this inequality repeatedly, and using the geometric series formulas as in the proof of previous theorem we get that
By definition of \(S\), \( T(n) = S(n-l) \). Therefore
Let \(a \geq 1\) and \(b \gt 1\) be real constants, and let \(f(n)\) be an asymptotically positive function. Suppose \(T(n)\) is defined for the positive reals to satisfy the recurrence
Then the growth of \(T(n)\) can be asymptotically determined under the following assumptions
The monotonicity property allows us to apply techniques like induction to prove bounds for \(T(n)\).
Since \(\lfloor\frac{n}{b}\rfloor \leq \lceil\frac{n}{b}\rceil \lt \frac{n}{b} + 1\), we replace \(\lfloor\frac{n}{b}\rfloor\) with \(\lceil\frac{n}{b}\rceil\) in the recurrence relation because \(\lceil\frac{n}{b}\rceil\) is larger, ensuring an upper bound on the running time. Furthermore, the size of \(\lceil\frac{n}{b}\rceil\) can be bounded by \(\frac{n}{b} + 1\). This simplification helps establish a tighter upper bound for analyzing the recurrence.
To simplify the recurrence and analyze it more effectively, we apply the change-of-function trick by defining a new function:
This simplifies the recurrence:
Summarizing the above inequalities we have
Applying this inequality repeatedly, and using the geometric series formulas as in the proof of previous theorem we get that
By definition of \(S\), \( T(n) = S(n-l) \). Therefore
Limitations of Master Theorem
For the Master Theorem to apply:
- The Master Theorem only applies to recurrence relations of the form \(T(n) = aT(\frac{n}{b}) + f(n)\). It cannot be used for other forms of recurrence relations.
- The parameters must satisfy \(a \geq 1\) and \(b \gt 1\). The theorem is not applicable if \(a \lt 1\) or \(b \leq 1\).
- The function \(f(n)\) must be asymptotically positive. If \(f(n)\) is not positive for large \(n\), the theorem cannot be applied.
- The Master Theorem is less effective for non-polynomial \(f(n)\). If \(f(n)\) does not fit into a polynomial form, the theorem might not provide a straightforward solution.
- The theorem assumes that the problem size is reduced by a constant factor \(b\). It cannot handle cases where the subdivision factor \(b\) varies with \(n\).
Examples
Example
Consider the recurrence
Solution
From the given recurrence:
- \(a=3\)
- \(b=2\)
- \(f(n)=n^{2}\)
From the Master Theorem, we calculate \(n^{\log_{b}(a)}=n^{\log_{2}(3)} \approx n^{1.585}\). Since \(f(n)=n^{2}\) is asymptotically larger than \(n^{\log_{b}(a)}\), we conclude that this is Case 3 of the General Master Theorem.
In Case 3 of the General Master Theorem, we must check the regularity condition \(af(\frac{n}{b}) \leq cf(n)\) for some \(c \lt 1\) and all \(n\) sufficiently large.
Substituting the values \(a=3\), \(b=2\), and \(f(n)=n^{2}\) into the expression \(af(\frac{n}{b}) \leq cf(n)\), we get:
Therefore, the inequality holds for any \(c\) such that \(\frac{3}{4} \leq c \lt 1\) and the regularity condition is satisfied.
Thus, \(f(n)\) dominates the growth of \(T(n)\), and we conclude:
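As a sanity check, a minimal numeric sketch (the base case \(T(1)=1\) is an assumption) confirms that \(T(n)/n^{2}\) approaches a constant:

```python
from functools import lru_cache

# T(n) = 3*T(n/2) + n^2 with assumed T(1) = 1; Case 3 predicts Theta(n^2),
# so T(n)/n^2 should approach a constant (here sum (3/4)^i = 4).
@lru_cache(maxsize=None)
def T(n: int) -> float:
    return 1 if n <= 1 else 3 * T(n // 2) + n * n

for n in (2**10, 2**14, 2**18):
    print(n, round(T(n) / n**2, 4))   # approaches 4
```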
Example
Consider the recurrence
Solution
From the given recurrence:
- \(a=4\)
- \(b=2\)
- \(f(n)=n^{2}\)
From the Master Theorem, we calculate \(n^{\log_{b}(a)}=n^{\log_{2}(4)}=n^{2}\). Since \(f(n)=n^{2}\) is asymptotically equal to \(n^{\log_{b}(a)}=n^{2}\), we conclude that this is Case 2 of the General Master Theorem.
Thus, \(f(n)\) and \(n^{\log_{b}(a)}\) grow at the same rate, contributing a logarithmic factor across the levels, and we conclude:
Example
Consider the recurrence
Solution
From the given recurrence:
- \(a=1\)
- \(b=2\)
- \(f(n)=2^{n}\)
From the Master Theorem, we calculate \(n^{\log_{b}(a)}=n^{\log_{2}(1)}=n^{0}=1\). Since \(f(n)=2^{n}\) is asymptotically faster than \(n^{\log_{b}(a)}=1\), we conclude that this is Case 3 of the General Master Theorem.
In Case 3 of the General Master Theorem, we must check the regularity condition \(af(\frac{n}{b}) \leq cf(n)\) for some \(c \lt 1\) and all \(n\) sufficiently large.
Substituting the values \(a=1\), \(b=2\), and \(f(n)=2^{n}\) into the expression \(af(\frac{n}{b}) \leq cf(n)\), we get:
As \(n\) becomes large, \(2^{-\frac{n}{2}}\) approaches \(0\). Therefore, the inequality holds for any \(c\) such that \(0 \lt c \lt 1\) and the regularity condition is satisfied.
Thus, \(f(n)\) dominates the growth of \(T(n)\), and we conclude:
Example
Consider the recurrence
Solution
The Master Theorem cannot be applied directly to this recurrence because:
- \(a=2^{n}\) is not a constant.
- \(f(n)=n^{n}\) grows super-exponentially, which is much faster than any polynomial or standard exponential growth.
Example
Consider the recurrence
Solution
From the given recurrence:
- \(a=16\)
- \(b=4\)
- \(f(n)=n\)
From the Master Theorem, we calculate \(n^{\log_{b}(a)}=n^{\log_{4}(16)}=n^{2}\). Since \(f(n)=n\) is asymptotically smaller than \(n^{\log_{b}(a)}=n^{2}\), we conclude that this is Case 1 of the General Master Theorem.
Thus, \(n^{\log_{b}(a)}\) dominates the growth of \(T(n)\), and we conclude:
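A similar numeric sketch (again assuming \(T(1)=1\)) confirms the \(\Theta(n^{2})\) behavior:

```python
from functools import lru_cache

# T(n) = 16*T(n/4) + n with assumed T(1) = 1; since n^(log_4 16) = n^2
# dominates f(n) = n, Case 1 predicts Theta(n^2).
@lru_cache(maxsize=None)
def T(n: int) -> float:
    return 1 if n <= 1 else 16 * T(n // 4) + n

for n in (4**5, 4**7, 4**9):
    print(n, round(T(n) / n**2, 4))   # settles near 4/3
```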
Example
Consider the recurrence
Solution
From the given recurrence:
- \(a=2\)
- \(b=2\)
- \(f(n)=n\log(n)\)
From the Master Theorem, we calculate \(n^{\log_{b}(a)}=n^{\log_{2}(2)}=n\). Since \(f(n)=n\log(n)\) is asymptotically larger than \(n^{\log_{b}(a)}=n\) by a logarithmic (not polynomial) factor, we conclude that this is Case 2a of the Extended General Master Theorem.
Thus, the logarithmic factor accumulates across the levels of the recursion, and we conclude:
Example
Consider the recurrence
Solution
From the given recurrence:
- \(a=2\)
- \(b=2\)
- \(f(n)=\frac{n}{\log(n)}\)
From the Master Theorem, we calculate \(n^{\log_{b}(a)}=n^{\log_{2}(2)}=n\). Since \(f(n)=\frac{n}{\log(n)}\) is asymptotically smaller than \(n^{\log_{b}(a)}=n\) by a logarithmic (not polynomial) factor, we conclude that this is Case 2b of the Extended General Master Theorem.
Thus, the harmonic-series behavior across the levels yields an extra \(\log\log\) factor, and we conclude:
Example
Consider the recurrence
Solution
From the given recurrence:
- \(a=2\)
- \(b=4\)
- \(f(n)=n^{0.58}\)
From the Master Theorem, we calculate \(n^{\log_{b}(a)}=n^{\log_{4}(2)}=n^{0.5}\). Since \(f(n)=n^{0.58}\) is asymptotically faster than \(n^{\log_{b}(a)}=n^{0.5}\), we conclude that this is Case 3 of the General Master Theorem.
In Case 3 of the General Master Theorem, we must check the regularity condition \(af(\frac{n}{b}) \leq cf(n)\) for some \(c \lt 1\) and all \(n\) sufficiently large.
Substituting the values \(a=2\), \(b=4\), and \(f(n)=n^{0.58}\) into the expression \(af(\frac{n}{b}) \leq cf(n)\), we get:
Therefore, the inequality holds for any \(c\) such that \(\frac{1}{2^{0.16}} \leq c \lt 1\) and the regularity condition is satisfied.
Thus, \(f(n)\) dominates the growth of \(T(n)\), and we conclude:
Example
Consider the recurrence
Solution
The Master Theorem cannot be applied to this recurrence because \(a \lt 1\).
Example
Consider the recurrence
Solution
From the given recurrence:
- \(a=16\)
- \(b=4\)
- \(f(n)=n!\)
From the Master Theorem, we calculate \(n^{\log_{b}(a)}=n^{\log_{4}(16)}=n^{2}\). Since \(f(n)=n!\) is asymptotically faster than \(n^{\log_{b}(a)}=n^{2}\), we conclude that this is Case 3 of the General Master Theorem.
In Case 3 of the General Master Theorem, we must check the regularity condition \(af(\frac{n}{b}) \leq cf(n)\) for some \(c \lt 1\) and all \(n\) sufficiently large.
Substituting the values \(a=16\), \(b=4\), and \(f(n)=n!\) into the expression \(af(\frac{n}{b}) \leq cf(n)\), we get:
The factor \(16\left(\frac{n}{4}\right)!\) grows slower than \(n!\) for large \(n\). Therefore, the inequality holds for any \(c\) such that \(0 \lt c \lt 1\) and the regularity condition is satisfied.
Thus, \(f(n)\) dominates the growth of \(T(n)\), and we conclude:
Example
Consider the recurrence
Solution
From the given recurrence:
- \(a=\sqrt{2}\)
- \(b=2\)
- \(f(n)=\log(n)\)
From the Master Theorem, we calculate \(n^{\log_{b}(a)}=n^{\log_{2}(\sqrt{2})}=n^{\frac{1}{2}}\). Since \(f(n)=\log(n)\) is asymptotically slower than \(n^{\log_{b}(a)}=n^{\frac{1}{2}}\), we conclude that this is Case 1 of the General Master Theorem.
Thus, \(n^{\log_{b}(a)}\) dominates the growth of \(T(n)\), and we conclude:
Example
Consider the recurrence
Solution
From the given recurrence:
- \(a=3\)
- \(b=2\)
- \(f(n)=n\)
From the Master Theorem, we calculate \(n^{\log_{b}(a)}=n^{\log_{2}(3)} \approx n^{1.585}\). Since \(f(n)=n\) is asymptotically smaller than \(n^{\log_{b}(a)} \approx n^{1.585}\), we conclude that this is Case 1 of the General Master Theorem.
Thus, \(n^{\log_{b}(a)}\) dominates the growth of \(T(n)\), and we conclude:
Example
Consider the recurrence
Solution
From the given recurrence:
- \(a=3\)
- \(b=3\)
- \(f(n)=\sqrt{n}\)
From the Master Theorem, we calculate \(n^{\log_{b}(a)}=n^{\log_{3}(3)}=n\). Since \(f(n)=\sqrt{n}\) is asymptotically slower than \(n^{\log_{b}(a)}=n\), we conclude that this is Case 1 of the General Master Theorem.
Thus, \(n^{\log_{b}(a)}\) dominates the growth of \(T(n)\), and we conclude:
Example
Consider the recurrence
Solution
From the given recurrence:
- \(a=3\)
- \(b=4\)
- \(f(n)=n\log(n)\)
From the Master Theorem, we calculate \(n^{\log_{b}(a)}=n^{\log_{4}(3)} \approx n^{0.792}\). Since \(f(n)=n\log(n)\) is asymptotically faster than \(n^{\log_{b}(a)} \approx n^{0.792}\), we conclude that this is Case 3 of the General Master Theorem.
In Case 3 of the General Master Theorem, we must check the regularity condition \(af(\frac{n}{b}) \leq cf(n)\) for some \(c \lt 1\) and all \(n\) sufficiently large.
Substituting the values \(a=3\), \(b=4\), and \(f(n)=n\log(n)\) into the expression \(af(\frac{n}{b}) \leq cf(n)\), we get:
As \(n\) becomes large, \(\frac{3\log(4)}{4\log(n)}\) approaches \(0\). Therefore, the inequality holds for any \(c\) such that \(\frac{3}{4} \leq c \lt 1\) and the regularity condition is satisfied.
Thus, \(f(n)\) dominates the growth of \(T(n)\), and we conclude:
Example
Consider the recurrence
Solution
From the given recurrence:
- \(a=3\)
- \(b=3\)
- \(f(n)=\frac{n}{2}\)
From the Master Theorem, we calculate \(n^{\log_{b}(a)}=n^{\log_{3}(3)}=n\). Since \(f(n)=\frac{n}{2}\) is asymptotically equal to \(n^{\log_{b}(a)}=n\), we conclude that this is Case 2 of the General Master Theorem.
Thus, \(f(n)\) and \(n^{\log_{b}(a)}\) grow at the same rate, contributing a logarithmic factor across the levels, and we conclude:
Example
Consider the recurrence
Solution
From the given recurrence:
- \(a=6\)
- \(b=3\)
- \(f(n)=n^{2}\log(n)\)
From the Master Theorem, we calculate \(n^{\log_{b}(a)}=n^{\log_{3}(6)} \approx n^{1.631}\). Since \(f(n)=n^{2}\log(n)\) is asymptotically faster than \(n^{\log_{b}(a)} \approx n^{1.631}\), we conclude that this is Case 3 of the General Master Theorem.
In Case 3 of the General Master Theorem, we must check the regularity condition \(af(\frac{n}{b}) \leq cf(n)\) for some \(c \lt 1\) and all \(n\) sufficiently large.
Substituting the values \(a=6\), \(b=3\), and \(f(n)=n^{2}\log(n)\) into the expression \(af(\frac{n}{b}) \leq cf(n)\), we get:
As \(n\) becomes large, \(\frac{2\log(3)}{3\log(n)}\) approaches \(0\). Therefore, the inequality holds for any \(\frac{2}{3} \leq c \lt 1\). Since such a \(c\) exists, the regularity condition is satisfied.
Thus, \(f(n)\) dominates the growth of \(T(n)\), and we conclude:
Example
Consider the recurrence
Solution
From the given recurrence:
- \(a=4\)
- \(b=2\)
- \(f(n)=\frac{n}{\log(n)}\)
From the Master Theorem, we calculate \(n^{\log_{b}(a)}=n^{\log_{2}(4)}=n^{2}\). Since \(f(n)=\frac{n}{\log(n)}\) is asymptotically slower than \(n^{\log_{b}(a)}=n^{2}\), we conclude that this is Case 1 of the General Master Theorem.
Thus, \(n^{\log_{b}(a)}\) dominates the growth of \(T(n)\), and we conclude:
Example
Consider the recurrence
Solution
The Master Theorem cannot be applied to this recurrence because \(f(n)\) is not positive.
Example
Consider the recurrence
Solution
From the given recurrence:
- \(a=7\)
- \(b=3\)
- \(f(n)=n^{2}\)
From the Master Theorem, we calculate \(n^{\log_{b}(a)}=n^{\log_{3}(7)} \approx n^{1.771}\). Since \(f(n)=n^{2}\) is asymptotically faster than \(n^{\log_{b}(a)} \approx n^{1.771}\), we conclude that this is Case 3 of the General Master Theorem.
In Case 3 of the General Master Theorem, we must check the regularity condition \(af(\frac{n}{b}) \leq cf(n)\) for some \(c \lt 1\) and all \(n\) sufficiently large.
Substituting the values \(a=7\), \(b=3\), and \(f(n)=n^{2}\) into the expression \(af(\frac{n}{b}) \leq cf(n)\), we get:
Therefore, the inequality holds for any \(c\) such that \(\frac{7}{9} \leq c \lt 1\), and the regularity condition is satisfied.
Thus, \(f(n)\) dominates the growth of \(T(n)\), and we conclude:
Example
Consider the recurrence
Solution
From the given recurrence:
- \(a=4\)
- \(b=2\)
- \(f(n)=\log(n)\)
From the Master Theorem, we calculate \(n^{\log_{b}(a)}=n^{\log_{2}(4)}=n^{2}\). Since \(f(n)=\log(n)\) is asymptotically slower than \(n^{\log_{b}(a)}=n^{2}\), we conclude that this is Case 1 of the General Master Theorem.
Thus, \(n^{\log_{b}(a)}\) dominates the growth of \(T(n)\), and we conclude:
Example
Consider the recurrence
Solution
From the given recurrence:
- \(a=1\)
- \(b=2\)
- \(f(n)=n(2-\sin(n))\)
From the Master Theorem, we calculate \(n^{\log_{b}(a)}=n^{\log_{2}(1)}=n^{0}=1\). Since \(f(n)=n(2-\sin(n))\) is asymptotically faster than \(n^{\log_{b}(a)}=1\), we conclude that this is Case 3 of the General Master Theorem.
In Case 3 of the General Master Theorem, we must check the regularity condition \(af(\frac{n}{b}) \leq cf(n)\) for some \(c \lt 1\) and all \(n\) sufficiently large.
Substituting the values \(a=1\), \(b=2\), and \(f(n)=n(2-\sin(n))\) into the expression \(af(\frac{n}{b}) \leq cf(n)\), we get:
Consider \(n = \pi k\), where \(k \geq 0\) and arbitrarily large. The value of \(\sin(n)\) will always be \(0\) and the value of \(\sin(\frac{n}{2})\) alternates depending on \(k \mod 4\).
If \(k \equiv 0 \pmod{4}\), then \(\sin(\frac{k\pi}{2}) = 0\). The inequality becomes:
If \(k \equiv 1 \pmod{4}\), then \(\sin(\frac{k\pi}{2}) = 1\). The inequality becomes:
If \(k \equiv 2 \pmod{4}\), then \(\sin(\frac{k\pi}{2}) = 0\). The inequality becomes:
If \(k \equiv 3 \pmod{4}\), then \(\sin(\frac{k\pi}{2}) = -1\). The inequality becomes:
Thus, over all values of \(k\), the largest lower bound on \(c\) is \(\frac{3}{4}\); any \(c\) with \(\frac{3}{4} \leq c \lt 1\) works, and the regularity condition is satisfied.
Thus, \(f(n)\) dominates the growth of \(T(n)\), and we conclude:
Example
Consider the recurrence
Solution
From the given recurrence:
- \(a=1\)
- \(b=2\)
- \(f(n)=n(2-\cos(n))\)
From the Master Theorem, we calculate \(n^{\log_{b}(a)}=n^{\log_{2}(1)}=n^{0}=1\). Since \(f(n)=n(2-\cos(n))\) is asymptotically faster than \(n^{\log_{b}(a)}=1\), we conclude that this is Case 3 of the General Master Theorem.
In Case 3 of the General Master Theorem, we must check the regularity condition \(af(\frac{n}{b}) \leq cf(n)\) for some \(c \lt 1\) and all \(n\) sufficiently large.
Substituting the values \(a=1\), \(b=2\), and \(f(n)=n(2-\cos(n))\) into the expression \(af(\frac{n}{b}) \leq cf(n)\), we get:
Consider \(n = \pi k\), where \(k \geq 0\) and arbitrarily large. The value of \(\cos(n)\) is \((-1)^{k}\) and the value of \(\cos(\frac{n}{2})\) alternates depending on \(k \mod 4\).
If \(k \equiv 0 \pmod{4}\), then \(\cos(\frac{k\pi}{2}) = 1\) and \(\cos(n) = 1\). The inequality becomes:
If \(k \equiv 1 \pmod{4}\), then \(\cos(\frac{k\pi}{2}) = 0\) and \(\cos(n) = -1\). The inequality becomes:
If \(k \equiv 2 \pmod{4}\), then \(\cos(\frac{k\pi}{2}) = -1\) and \(\cos(n) = 1\). The inequality becomes:
If \(k \equiv 3 \pmod{4}\), then \(\cos(\frac{k\pi}{2}) = 0\) and \(\cos(n) = -1\). The inequality becomes:
Thus, over all values of \(k\), the constraint forces \(c \geq \frac{3}{2}\), which violates the requirement that \(c \lt 1\).
Since the regularity condition is not satisfied, this implies that the recurrence does not fit into Case 3 of the General Master Theorem.
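A numeric illustration of this failure, sampling along \(n=\pi k\) with \(k \equiv 2 \pmod{4}\) as above:

```python
import math

# Along n = pi*k with k = 2 (mod 4), the ratio f(n/2)/f(n) for
# f(n) = n*(2 - cos n) stays near 3/2, so no constant c < 1 can
# satisfy the regularity condition f(n/2) <= c*f(n).
def f(x: float) -> float:
    return x * (2 - math.cos(x))

for k in (2, 6, 10, 1002):
    n = math.pi * k
    print(k, round(f(n / 2) / f(n), 6))   # ~1.5 for arbitrarily large k
```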