
Run-time Analysis

This is a quick review of run-time analysis.

A Different Approach

To begin, let us consider the following relationship between people: their age. Given two people, Ali and Bailey, either:

  • Ali is younger than Bailey,
  • Ali is the same age as Bailey, or
  • Ali is older than Bailey.

This is a weak ordering. It allows us to group everyone into age-classes (equivalence classes) where everyone in the same age-class is the same age: for example, a typical 2nd-year cohort may look like what is shown in Figure 1.


Figure 1. A typical fall 2B cohort broken up into age-classes.

If all we care about is the age of an individual, we don't need the entire group. Instead, we can choose a representative (representative element) for each age-class, as is shown in Figure 2.


Figure 2. A typical fall 2B cohort with a representative chosen from each age-class.

Now, we don't have to worry about the entire cohort, but rather—so long as age is our primary concern—we can just consider the representatives, as is shown in Figure 3.


Figure 3. The representatives chosen from each age-class.

Now, in order to compare the age of two arbitrary people in the cohort, I just have to find the representatives for each of them and compare the age of the representatives. For example, suppose the seven representatives chosen above are:

Ali, Bailey, Casey, Dylan, Emerson, Finley, and Douglas

respectively. Leslie is the same age as Dylan and Morgan is the same age as Bailey; thus, as Dylan is older than Bailey, it follows that Leslie is older than Morgan. Similarly, if Riley and Sidney are both the same age as Emerson, we may conclude that Riley is the same age as Sidney.

Symbolism

We will use the following symbolism: having chosen our representatives, then

  • If Willi is strictly younger than, say, Casey, we will say Willi = o(Casey),
    that is, Willi is little-oh of Casey;
  • If Willi is either younger than or possibly the same age as Casey, we will say Willi = O(Casey),
    that is, Willi is big-Oh of Casey;
  • If Willi is the same age as Casey, we will say Willi = Θ(Casey),
    that is, Willi is big-Theta of Casey;
  • If Willi is either the same age as or possibly older than Casey, we will say Willi = Ω(Casey),
    that is, Willi is big-Omega of Casey; and
  • If Willi is strictly older than Casey, we will say Willi = ω(Casey),
    that is, Willi is little-omega of Casey.

It is also likely true that most people in this cohort are o(Douglas).

Weak Ordering

This example demonstrates all of the properties of a weak ordering:

  1. For any object in the ordering, there is an entire class of objects, all of which are equivalent to it under the property or relation in question.
  2. From each class of equivalent objects, we can, if necessary, choose a single representative element.
  3. Given two objects that are not equivalent to each other, one of the two must precede the other.

Why do you care? Well, for example, almost every container in the Standard Template Library (STL) in C++ assumes that the objects stored are related by a weak ordering.

Understanding weak orderings is also necessary for a proper understanding of run-time analysis.

Run-time Analysis

We will begin analyzing run-times by looking at a few simple examples of code where we can deduce the run-time.

Now, if you consider the C code that pops the top off of a queue, it looks like this:

void pop( struct Queue *q ) {
	if ( q->queue_size != 0 ) {
		q->head = (q->head + 1) % q->array_capacity;
		--(q->queue_size);
	}
}

Here, all we do is make one comparison and, if it is true, update two variables. In either case, only a fixed number of instructions is executed, and that number does not depend on the number of objects in the queue; thus, we say the run-time is Θ(1). Any operation on a queue that takes a fixed number of instructions, regardless of the number of items in the queue, can be said to be in the same run-time class, and the representative we choose for all of these is the member representing one instruction: 1.

Next, consider the following C code that counts the number of times that a particular pointer appears in a queue:

int count( struct Queue *q, void *ptr ) {
	if ( q->queue_size == 0 ) {
		return 0;
	}

	int c = 0;
	int i;

	// Step through the circular array from the head to the tail
	for ( i = q->head; true; i = (i + 1) % q->array_capacity ) {
		if ( q->array[i] == ptr ) {
			++c;
		}

		if ( i == q->tail ) {
			return c;
		}
	}

	assert( false ); // we should never get here...
}

The for loop checks all tex:$$n$$ entries of the array. It will never end early, as every entry must be checked. Consequently, the number of instructions that are run is something of the form tex:$$an + b$$ where tex:$$a$$ and tex:$$b$$ are constants. If the size of the queue is doubled, the amount of time this function takes is also approximately doubled. If the number of entries in the queue is increased by a factor of 10, the run-time is also increased by a factor of 10. These are the same characteristics as those of any algorithm that takes exactly tex:$$n$$ operations to run. Thus, we will say that the above function runs in Θ(tex:$$n$$) time where tex:$$n$$ is the number of entries in the queue.
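
To see the doubling claim explicitly: if the instruction count is tex:$$an + b$$ with tex:$$a > 0$$, then

tex:$$\lim_{n \rightarrow \infty} \frac{a(2n) + b}{an + b} = 2$$,

so, for large queues, doubling the number of entries approximately doubles the work.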

Perspective

If tex:$$f(n)$$ describes the run-time of one algorithm and tex:$$g(n)$$ describes the run-time of another, and tex:$$f(n)$$ grows faster than tex:$$g(n)$$, then the first algorithm requires more instructions to execute, and therefore we say that the first algorithm is slower than the second. We will on occasion mix terminology: "a faster algorithm has a run-time that grows more slowly than that of another" and "a slower algorithm has a run-time that grows faster than that of a faster algorithm".

Adding some Rigour

At this point, we have noted that associated with each function is an algebraic expression that gives the number of operations that must be computed. For any algorithm where the run-time or memory depends on a parameter tex:$$n$$ (perhaps an array of size tex:$$n$$ or an tex:$$n \times n$$ matrix), we will represent the algebraic expression associated with the algorithm by the mathematical function tex:$$f(n)$$. Interestingly enough, for the most part, we are not all that interested in the exact details of this function. Just like we may not be interested in students other than their age, we are only interested in comparing functions based on their growth characteristics.

Thus, we will say that two functions tex:$$f(n)$$ and tex:$$g(n)$$ both grow at the same rate if

tex:$$0 < \lim_{n \rightarrow \infty} \frac{f(n)}{g(n)} < \infty$$.

That is, two functions grow at the same rate if, in the limit, the ratio does not go to either 0 or infinity. In most cases, it actually goes to a non-zero constant.

For example, all of the following functions grow at the same rate:

tex:$$3n + 5$$, tex:$$58n + 7$$, tex:$$92n + 523$$, tex:$$n$$, tex:$$37n + 43$$, tex:$$854n + 923$$, tex:$$39n + 534\ln(n) + 43$$, and tex:$$58n + 2\sqrt{n} + 532$$.

As they all grow at the same rate, we will pick a representative, tex:$$n$$, and we will say that, for example, tex:$$37n + 43 = \Theta(n)$$. In this case,

tex:$$\lim_{n \rightarrow \infty} \frac{37n + 43}{n} = 37$$.

The following doubly nested for-loop multiplies a matrix by a vector:

double *matrix_vector_multiplication( double **M, double *v, int n ) {
	int i, j;
	double *u = malloc( n*sizeof( double ) );

	for ( i = 0; i < n; ++i ) {
		u[i] = 0.0;

		for ( j = 0; j < n; ++j ) {
			u[i] += M[i][j]*v[j];
		}
	}

	return u;
}

This requires tex:$$n^2$$ multiplications and tex:$$n^2$$ additions. Consequently, if the value of tex:$$n$$ is doubled, then the algorithm will take four times as long to run. If the value of tex:$$n$$ is increased by a factor of 10, the run-time will be increased by a factor of 100. Consequently, we will say that this algorithm runs in tex:$$\Theta\left (n^2 \right )$$ time.
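
To see the quadrupling explicitly, note that the operation count is approximately tex:$$2n^2$$ (one multiplication and one addition for each of the tex:$$n^2$$ inner iterations), and

tex:$$\frac{2(2n)^2}{2n^2} = 4$$ while tex:$$\frac{2(10n)^2}{2n^2} = 100$$.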

Without proof, the algorithm you learned in your linear algebra course runs in tex:$$\Theta\left(n^3\right)$$ time. There is an algorithm that will multiply two tex:$$n \times n$$ matrices slightly more quickly, namely, the Strassen algorithm, that runs in tex:$$\Theta\left(n^{\log_2(7)}\right )$$ time where tex:$$\log_2(7) \approx 2.81$$.

The Weak Ordering of Run Times

We have seen five run-times:

tex:$$1$$, tex:$$n$$, tex:$$n^2$$, tex:$$n^{\log_2(7)}$$, and tex:$$n^3$$.

In the long run, an algorithm that is tex:$$\Theta(n)$$ will run faster than an algorithm that is tex:$$\Theta(n^2)$$. In fact, consider any two run-times where the dominant terms are tex:$$n^p$$ and tex:$$n^q$$ where tex:$$p < q$$. Then

tex:$$\lim_{n \rightarrow \infty} \frac{n^p}{n^q} = \lim_{n \rightarrow \infty} n^{p - q}$$

and because tex:$$p < q$$, tex:$$p - q < 0$$ and thus

tex:$$\lim_{n \rightarrow \infty} \frac{n^p}{n^q} = \lim_{n \rightarrow \infty} n^{p - q} = 0$$.

Therefore, we can order all polynomial run-times from the fastest-running (tex:$$\Theta(1)$$) to ever slower-running algorithms (at least, slower running in the long term). To say that one run-time tex:$$f(n)$$ grows strictly more slowly than another run-time tex:$$g(n)$$ (that is, that the first algorithm runs strictly faster in the long term), we will write

tex:$$f(n) = o(g(n))$$

which says that

tex:$$\lim_{n \rightarrow \infty} \frac{f(n)}{g(n)} = 0$$.

For example, tex:$$1 = o(n)$$, tex:$$n = o(n^2)$$, tex:$$n^2 = o\left(n^{\log_2(7)}\right)$$, and tex:$$n^{\log_2(7)} = o\left(n^3\right)$$.
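
Although we will mostly use Θ and o, the other symbols from the earlier analogy can be characterized the same way whenever the limit exists: tex:$$f(n) = O(g(n))$$ if tex:$$\lim_{n \rightarrow \infty} \frac{f(n)}{g(n)} < \infty$$, tex:$$f(n) = \Omega(g(n))$$ if tex:$$\lim_{n \rightarrow \infty} \frac{f(n)}{g(n)} > 0$$, and tex:$$f(n) = \omega(g(n))$$ if tex:$$\lim_{n \rightarrow \infty} \frac{f(n)}{g(n)} = \infty$$.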

How About Logarithms?

It is reasonably easy to show that tex:$$1 = o(\ln(n))$$ by definition of the limit:

tex:$$\lim_{n \rightarrow \infty} \frac{1}{\ln(n)} = 0$$.

More interesting is that the logarithm grows slower than any polynomial tex:$$n^p$$ where tex:$$p > 0$$, as we must use l'Hôpital's rule:

tex:$$\lim_{n \rightarrow \infty} \frac{\ln(n)}{n^p} = \lim_{n \rightarrow \infty} \frac{1/n}{pn^{p - 1}} = \frac{1}{p} \lim_{n \rightarrow \infty} \frac{1}{n^p} = 0$$

where the last equality follows because tex:$$p > 0$$.

Therefore, we may say that, for example, tex:$$\ln(n) = o(n^{0.1})$$, tex:$$\ln(n) = o\left(\sqrt{n}\right)$$, and tex:$$\ln(n) = o(n)$$.

Similarly, tex:$$n = o(n \ln(n))$$ but tex:$$n \ln(n) = o\left(n^{1.1}\right)$$ and tex:$$n \ln(n) = o(n^2)$$.

Now, a binary search on a sorted list runs in tex:$$\Theta(\ln(n))$$ time, and three very fast sorting algorithms, quick sort, heap sort, and merge sort, all run in tex:$$\Theta(n \ln(n))$$ time (on average, in the case of quick sort). Selection sort, however, runs in tex:$$\Theta(n^2)$$ time, and therefore selection sort is significantly slower than any one of quick, heap, or merge sort.
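
To put this in perspective, for tex:$$n = 10^6$$ entries, tex:$$n \ln(n) \approx 1.4 \times 10^7$$ while tex:$$n^2 = 10^{12}$$; ignoring the constant factors hidden by the Θ symbols, selection sort does roughly 70 000 times more work.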

Therefore, our representative elements, from slowest growing (fastest running) to fastest growing (slowest running), are

tex:$$1$$, tex:$$\ln(n)$$, tex:$$\sqrt{n}$$, tex:$$n$$, tex:$$n \ln(n)$$, tex:$$n^2$$, tex:$$n^2 \ln(n)$$, tex:$$n^{\log_2(7)}$$, and tex:$$n^3$$.

Note, however, that because tex:$$\log_b(n) = \frac{\ln(n)}{\ln(b)}$$, it follows that all logarithms, regardless of the base so long as tex:$$b > 1$$, grow at the same rate:

tex:$$\lim_{n \rightarrow \infty} \frac{\log_a(n)}{\log_b(n)} = \lim_{n \rightarrow \infty} \frac{\frac{\ln(n)}{\ln(a)}}{\frac{\ln(n)}{\ln(b)}} = \lim_{n \rightarrow \infty} \frac{\ln(b)}{\ln(a)} = \frac{\ln(b)}{\ln(a)}$$

which is a constant so long as tex:$$a$$ and tex:$$b$$ are constants.

Exercise: Normally, multiplying two tex:$$n$$-digit numbers requires an algorithm that runs in tex:$$\Theta(n^2)$$ time. The Karatsuba algorithm multiplies two tex:$$n$$-digit numbers in tex:$$\Theta\left(n^{\log_2(3)}\right)$$ time. Where in the above list does this run-time fit?

What about Exponential and Factorial Run Times?

Suppose you write an algorithm that runs in exponential time: tex:$$\Theta(2^n)$$. The solution is very simple: in the words of Mr. Trump, "You're fired!" Any polynomial time algorithm grows more slowly than any exponential function: tex:$$n^{100} = o(2^n)$$—just apply l'Hôpital's rule one hundred times to confirm this. Also, if tex:$$1 < a < b$$, then tex:$$a^n = o(b^n)$$. Finally, tex:$$a^n = o(n!)$$ for any value of tex:$$a > 1$$.
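
For example, to see that tex:$$a^n = o(b^n)$$ when tex:$$1 < a < b$$:

tex:$$\lim_{n \rightarrow \infty} \frac{a^n}{b^n} = \lim_{n \rightarrow \infty} \left( \frac{a}{b} \right)^n = 0$$

because tex:$$0 < \frac{a}{b} < 1$$.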

There was a story of a subject who was offered a reward by a king. The subject asked that the king take a chess board and place one grain of barley on the first square, two grains of barley on the second, tex:$$2^2$$ on the third, tex:$$2^3$$ on the fourth, tex:$$2^4$$ on the fifth, and so on. The king agreed, but when he realized he would have to pay out tex:$$2^{64} - 1 = 18446744073709551615$$ grains of barley (approximately tex:$$1.2 \times 10^{12}$$ metric tonnes, or the current production of barley for eight-and-a-half millenniums), he promptly had the subject killed. There is, however, no reasonable moral to this story.
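
The total follows from the geometric series:

tex:$$1 + 2 + 2^2 + \cdots + 2^{63} = \sum_{k = 0}^{63} 2^k = 2^{64} - 1$$.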

Similarly, a programmer told his manager that his algorithm was really fast: it took four nanoseconds to run on a problem of size 2, 8 ns on a problem of size 3, 16 ns on a problem of size 4, 32 ns on a problem of size 5, and so on. The manager was greatly impressed and earmarked a bonus for the programmer; however, when the algorithm was fully implemented and it turned out it would take tex:$$2^{50} {\rm ns}$$, or thirteen days, to run on a problem of size 50, the programmer was promptly fired. You may draw your own lesson.
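
The arithmetic is easy to check: tex:$$2^{50} {\rm\ ns} \approx 1.13 \times 10^{15} {\rm\ ns} \approx 1.13 \times 10^6 {\rm\ s} \approx 13 {\rm\ days}$$.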

Analyzing Algorithms

Now, the next question is, how do we analyze an algorithm?

Operators

First, any single operator in C that does not involve a function call runs in tex:$$\Theta(1)$$ time.

Serial Execution

Next, if two blocks of instructions are run serially (i.e., one after the other), and the run-times of the two blocks are tex:$$\Theta(f)$$ and tex:$$\Theta(g)$$, it follows that the run-time of the two blocks run in series is tex:$$\Theta(f + g)$$. This expression, however, may often be simplified.

For example, without further knowledge of the details of two blocks, say that one runs in tex:$$\Theta(n)$$ time and the other runs in tex:$$\Theta(n^2)$$ time. Thus, the two blocks run serially in tex:$$\Theta(n^2 + n)$$ time. However, tex:$$n^2 + n = \Theta(n^2)$$, and therefore we may simplify this to saying that the two blocks run serially in tex:$$\Theta(n^2)$$ time.
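
As a concrete illustration (a hypothetical function, not taken from these notes), the following runs a tex:$$\Theta(n)$$ block followed by a tex:$$\Theta(n^2)$$ block, so it runs in tex:$$\Theta(n + n^2) = \Theta(n^2)$$ time:

int serial_example( double *array, int n ) {
	int i, j;
	int count = 0;

	// Block 1: a single loop, Theta(n)
	for ( i = 0; i < n; ++i ) {
		if ( array[i] > 0.0 ) {
			++count;
		}
	}

	// Block 2: a doubly nested loop, Theta(n^2)
	for ( i = 0; i < n; ++i ) {
		for ( j = 0; j < n; ++j ) {
			if ( array[i] > array[j] ) {
				++count;
			}
		}
	}

	return count;
}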

As a different example, suppose that we have five blocks of code:

  • The first is a check which runs in tex:$$\Theta( 1 )$$ time,
  • The next runs in tex:$$\Theta( min( m_1, m_2 ) )$$ time,
  • The next runs in tex:$$\Theta( m_1 )$$ time,
  • The next runs in tex:$$\Theta( 1 )$$ time, and
  • The last runs in tex:$$\Theta( m_1 + n_1 + n_2 )$$ time.

If these five blocks of code are run serially, the run-time would be:

tex:$$\Theta( 1 + min( m_1, m_2 ) + m_1 + 1 + m_1 + n_1 + n_2 )$$.

Examining this, with no further knowledge of the values of tex:$$m_1$$, tex:$$m_2$$, tex:$$n_1$$, and tex:$$n_2$$, we can still determine, for example, that tex:$$min( m_1, m_2 ) = O(m_1)$$, and thus, we can simplify this to

tex:$$\Theta( m_1 + n_1 + n_2 )$$.

If any one of these variables is increased in size, the run-time of the block of code would be increased. If, in addition, you are given the additional information that tex:$$n_1 = \Theta( max( m_1, m_2 ) )$$ and tex:$$n_2 = \Theta( max( m_1, m_2 ) )$$, then we can also deduce that tex:$$max( m_1, m_2 ) = \Theta( m_1 + m_2 )$$, and therefore

tex:$$\Theta( m_1 + n_1 + n_2 ) = \Theta( m_1 + m_1 + m_2 + m_1 + m_2 ) = \Theta( m_1 + m_2 )$$.

Now, you may object: where would such an algorithm ever come up and how could such apparently bizarre requirements appear? These are actually taken from standard assumptions about sparse matrices and the above is a run-time analysis of adding two sparse matrices: matrix_matrix_operations.cpp. Any time you use finite-element analysis to simulate a physical system, the software will be using sparse systems of linear equations.

One consequence of the previous observations is that any finite sequence of such tex:$$\Theta(1)$$ instructions run serially must also run in tex:$$\Theta(1)$$ time. For example, the following C++ member function

template <typename Type>
void Splay_tree<Type>::Node::zig_zag( typename Splay_tree<Type>::Node * &ptr_to_this ) {
	Node *l =   left;
	Node *lr =  left->right;
	Node *lrl = left->right->left;
	Node *lrr = left->right->right;

	ptr_to_this = lr;
	lr->left = l;
	lr->right = this;
	l->right = lrl;
	left = lrr;
}

is nothing more than nine assignments run serially. Consequently, we may say the run-time is tex:$$\Theta(1)$$.

Loops

A for-loop of the form:

	for ( i = 0; i < n; ++i ) {
		// Some Theta(1) code
	}

runs in tex:$$\Theta(n)$$ time. A for-loop of the following type:

	for ( i = 1; i < n; i *= 2 ) {
		// Some Theta(1) code
	}

runs in tex:$$\Theta(\ln(n))$$ time. Actually, the body will run tex:$$\left \lfloor \log_2(n - 1) \right \rfloor + 1$$ times, but

tex:$$\lim_{n \rightarrow \infty} \frac{\left \lfloor \log_2(n - 1) \right \rfloor + 1}{\ln(n)} = \frac{1}{\ln(2)}$$

which is finite and non-zero.

Now, these are the two prototypical cases. Suppose you have a more complex loop:

	for ( i = 0; i < n; ++i ) {
		for ( j = 0; j < i; ++j ) {
			// Some Theta(1) code
		}
	}

The first time the outer loop runs, the inner loop does nothing; the second time, the inner loop iterates once; and the last time the outer loop iterates, the inner loop iterates tex:$$n - 1$$ times.

To solve such cases, we will simply use Maple. If the body of the inner loop runs in tex:$$\Theta(1)$$ time,

> sum( sum( 1, j = 0..(i - 1) ), i = 0..(n - 1) );

tex:$$\frac{1}{2}n^2 - \frac{1}{2}n$$

This represents tex:$$\sum_{i = 0}^{n - 1} \sum_{j = 0}^{i - 1} 1$$ where the summand tex:$$1$$ is a representative element of the run-time equivalence class tex:$$\Theta(1)$$. The limits of summation are the limits of the for loop. These similarities are given in Figure 4. If the upper limit of the loop is i < upper, then the upper limit of the summation should be tex:$${\rm upper} - 1$$.


Figure 4. The relationship between the run-times, summation notation, and Maple sum function.

From above, we know that tex:$$\frac{1}{2}n^2 - \frac{1}{2}n = \Theta(n^2)$$ and thus we may say that this loop runs in tex:$$\Theta(n^2)$$ time: double the value of tex:$$n$$ and this doubly nested for-loop will take four times as long to run.
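
If you want to check the count, a small (hypothetical) test program can tally the number of times the tex:$$\Theta(1)$$ body executes and compare it with tex:$$\frac{1}{2}n^2 - \frac{1}{2}n$$:

#include <stdio.h>

int main( void ) {
	int n = 10;
	int i, j;
	int count = 0;

	for ( i = 0; i < n; ++i ) {
		for ( j = 0; j < i; ++j ) {
			++count;    // the Theta(1) body
		}
	}

	// For n = 10, both values printed are 45
	printf( "%d %d\n", count, (n*n - n)/2 );

	return 0;
}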

Modified Example

If the body of the loop had code that ran in tex:$$\Theta(m + j)$$ time instead of tex:$$\Theta(1)$$, we would change our calculation as follows:

> sum( sum( m + j, j = 0..(i - 1) ), i = 0..(n - 1) );

tex:$$\frac{1}{2}mn^2 - \frac{1}{2}mn + \frac{1}{6}n^3 - \frac{1}{2}n^2 + \frac{1}{3}n$$

Examining this, we see that the run-time is therefore tex:$$\Theta( mn^2 + n^3 )$$.

Now, you should be asking yourself the following question: why can I take just tex:$$m + j$$? Could it not be that tex:$$5324m + 1323j + 83715$$ might have a significantly longer run-time than tex:$$m + j$$? This goes back to our weak ordering: the expression tex:$$m + j$$ represents all functions that are tex:$$\Theta(m + j)$$. We only need to choose one representative element from that class to observe how all members of the class will run. We could, for example, use the more complicated expression instead, and calculate

> sum( sum( 5324*m + 1323*j + 83715, j = 0..(i - 1) ), i = 0..(n - 1) );

tex:$$2662mn^2 - 2662mn + \frac{441}{2}n^3 + 41196n^2 - \frac{82833}{2}n$$

but in the end, careful observation will show that

tex:$$2662mn^2 - 2662mn + \frac{441}{2}n^3 + 41196n^2 - \frac{82833}{2}n = \Theta( mn^2 + n^3 )$$.

Another Example

Suppose our code looks like

	for ( i = 0; i < n; ++i ) {
		// Some Theta(1) code

		// Some Theta(ln(n)) code

		for ( j = 0; j < i; ++j ) {
			// Some Theta(ln(j)) code

			for ( k = 0; k < j; ++k ) {
				// Some Theta(1) code
			}
		}
	}

In Maple,

> sum( 1 + ln(n) + sum( ln(n) + sum( 1, k = 0..(j - 1) ), j = 0..(i - 1) ), i = 0..(n - 1) );

tex:$$\frac{4}{3}n + \frac{1}{2}n \ln(n) + \frac{1}{2} \ln(n) n^2 - \frac{1}{2}n^2 + \frac{1}{6}n^3$$

represents the sum tex:$$\sum_{i = 0}^{n - 1} \left ( 1 + \ln(n) + \sum_{j = 0}^{i - 1} \left ( \ln(j) + \sum_{k = 0}^{j - 1} 1 \right) \right)$$, except that the tex:$$\ln(j)$$ term has been replaced by the upper bound tex:$$\ln(n)$$ so that Maple produces a simple closed form; this does not change the asymptotic result.

You can see that, in each case, each for loop is represented by a sum over the limits of that for loop. Then, for each block of code with a particular run-time, we add a representative entry to the sum in the appropriate location.

By looking at the result from Maple, we can see that this is tex:$$\Theta(n^3)$$; however, if there was any question, we could always ask Maple what the asymptotic behaviour is:

> asympt( %, n );   # In Maple, % refers to the last calculated value.

tex:$$\frac{1}{6}n^3 + \left ( -\frac{1}{2} + \frac{1}{2}\ln(n) \right) n^2 + \left( \frac{1}{2} \ln(n) + \frac{4}{3} \right) n$$

As you can see, the dominant term appears first.

Exercise:

What are the run-times of the following two loops?

	for ( i = 0; i < n; ++i ) {
		for ( j = 0; j < i*i; ++j ) {
			// Some Theta(1) code
		}
	}

	for ( i = 0; i < n*n; ++i ) {
		for ( j = 0; j < i^2; ++j ) {
			// Some Theta(1) code
		}
	}

A free lunch at the Grad House for the first student in a course who correctly answers this question: What is the run-time of

	for ( i = 0; i < n; ++i ) {
		for ( j = 0; j < i^2; ++j ) {
			// Some Theta(1) code
		}
	}

? Only one guess per student and your answer should be sent to dwharder@uwaterloo.ca. If you do not get a response, you were either very late or your answer was incorrect; however, I will try to respond to all queries.

Divide-and-Conquer Algorithms

The only other really interesting case to look at is divide-and-conquer algorithms: algorithms that take a big problem, say, a problem of size tex:$$n$$, divide it into tex:$$a$$ sub-problems each of size tex:$$\frac{n}{b}$$, and then take the solutions to the tex:$$a$$ sub-problems and assemble a solution to the bigger problem. The work required either to break the problem into the tex:$$a$$ sub-problems or to take the solutions to those tex:$$a$$ sub-problems and assemble a solution to the larger problem is tex:$$\Theta(n^k)$$. Now, the three constants tex:$$a$$, tex:$$b$$, and tex:$$k$$ are fixed for any specific algorithm. For example:

  1. A binary search uses tex:$$\Theta(1)$$ time to divide a list into two and determine which of the two halves should be searched. Therefore tex:$$a = 1$$, tex:$$b = 2$$, and tex:$$k = 0$$ (recall that tex:$$n^0 = 1$$).
  2. Merge sort uses tex:$$\Theta(1)$$ time to divide a problem into two sub-problems of approximately half the original size. Merge sort is then recursively called on both of these two sub-problems, but when they return, the two sorted sub-lists must still be merged: a tex:$$\Theta(n)$$ problem. Therefore tex:$$a = 2$$, tex:$$b = 2$$, and tex:$$k = 1$$, just like quick sort.
  3. Multiplying two tex:$$n$$-digit numbers normally requires tex:$$\Theta(n^2)$$ time (try multiplying two 10-digit numbers). The Karatsuba algorithm breaks the problem into three sub-problems, each requiring the multiplication of two tex:$$n/2$$-digit numbers. The final answer requires additions, which take tex:$$\Theta(n)$$ time. Therefore tex:$$a = 3$$, tex:$$b = 2$$, and tex:$$k = 1$$.

Calculating the run-time of such divide-and-conquer algorithms is straight-forward given the three constants:

  • If tex:$$a < b^k$$, the run-time is tex:$$O\left(n^k\right)$$;
  • If tex:$$a = b^k$$, the run-time is tex:$$O\left(\log_b(n)n^k\right)$$; and
  • If tex:$$a > b^k$$, the run-time is tex:$$O\left(n^{\log_b(a)}\right)$$.

In the first case, there are not that many sub-problems: tex:$$a$$ is sufficiently small that the run-time is dominated by the effort required either to break the problem into sub-problems or to recombine the solutions to the sub-problems into a solution to the larger problem.

Binary search and merge sort (Examples 1 and 2 above) both satisfy the second condition:

With binary search, tex:$$a = 1 = 2^0 = b^k$$, and thus the run-time is tex:$$O\left(\log_b(n)n^k\right) = O\left(\log_2(n)n^0\right) = O(\ln(n))$$.

With merge sort, tex:$$a = 2 = 2^1 = b^k$$, and thus the run-time is tex:$$O\left(\log_b(n)n^k\right) = O\left(\log_2(n)n^1\right) = O(n \ln(n))$$.
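
As a concrete illustration of the binary-search case, here is a minimal sketch (not taken from these notes) of a recursive binary search on the sorted range array[low], ..., array[high - 1]; each call does tex:$$\Theta(1)$$ work and makes at most one recursive call on half of the range, so tex:$$a = 1$$, tex:$$b = 2$$, and tex:$$k = 0$$:

int binary_search( int const *array, int low, int high, int value ) {
	if ( low >= high ) {
		return -1;                          // empty range: not found
	}

	int mid = low + (high - low)/2;             // Theta(1) work to split the range

	if ( array[mid] == value ) {
		return mid;
	} else if ( array[mid] < value ) {
		return binary_search( array, mid + 1, high, value );   // search the right half
	} else {
		return binary_search( array, low, mid, value );        // search the left half
	}
}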

In the third case, Karatsuba's algorithm, tex:$$a = 3 > 2^1 = b^k$$, and thus the run-time is given by the third statement: tex:$$O\left(n^{\log_b(a)}\right) = O\left(n^{\log_2(3)}\right) \approx O\left(n^{1.585}\right)$$, which is significantly faster than normal multiplication:

  • With the usual multiplication algorithm, if there are 10 times more digits, it takes 100 times as long to run.
  • With Karatsuba's algorithm, if there are 10 times more digits, it takes only about 38.4 times as long to run (see the short calculation below).
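
Both factors come directly from the growth rates: for the usual algorithm, tex:$$\frac{(10n)^2}{n^2} = 100$$, while for Karatsuba's algorithm, tex:$$\frac{(10n)^{\log_2(3)}}{n^{\log_2(3)}} = 10^{\log_2(3)} \approx 38.4$$.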