IC312: Data Structures (FA16)


UNIT 2: Recursion

1 Recursion Review

1.1 Recursion!

Yes, it's back! You all learned about recursion in IC210, and you might have decided it was pretty useless, since you could do all the same things with iteration, and confusing besides.

Well, Data Structures (and most especially, trees) is where recursion becomes really, really useful. When we have more sophisticated data structures and algorithms, we need recursion to write simple code that works and we can understand.

The simple fact is this: with many, many data structures, recursion is WAY easier to cope with than iteration is. Embrace it, because it's about to make your life easier. Today, then, is an IC210-level reminder of what recursion is and how it works.

Recursion is a function calling itself. So, if we understand how function calls work, we'll understand recursion a lot better. Imagine the following set of functions (in pseudocode):

f1() {
    print "inside f1"
}

f2() {
    print "starting f2"
    f1()
    print "leaving f2"
}

main() {
    print "starting main"
    f2()
    print "leaving main"
}


If we were to run this program, the output would be:

starting main
starting f2
inside f1
leaving f2
leaving main


To understand why, let's think about the call stack. The call stack works like a stack of plates: plates can only be placed on the top, and can only be removed from the top. For this reason, they're often referred to as LIFO (last-in-first-out). Instead of plates, though, the call stack has the functions that are running; when a function is called, it is placed on the top of the stack. A function ONLY runs when it is at the top of the stack; the rest of the time, it sits and waits for the one above it to return.
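The plate analogy is exactly a LIFO stack data type. Here's a quick sketch of the trace above using Java's `ArrayDeque` as the stack (the class and variable names are just illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class PlateStack {
    public static void main(String[] args) {
        Deque<String> callStack = new ArrayDeque<>();
        callStack.push("main()");   // main starts running
        callStack.push("f2()");     // main calls f2
        callStack.push("f1()");     // f2 calls f1
        // Only the function on top of the stack is "running":
        System.out.println(callStack.peek()); // f1()
        callStack.pop();                      // f1 returns
        System.out.println(callStack.peek()); // f2() resumes
    }
}
```

Note that `push` and `pop` only ever touch the top, which is all a call stack ever needs.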

So, we start running the program. Obviously, it prints "starting main." At this point, our call stack looks like this:

$\array{ \boxed{\texttt{main()}} }$

Our next line is to call the function f2. So, f2 gets put on the top of the stack:

$\array{ \boxed{\texttt{f2()}} \\ \boxed{\texttt{main()}} }$

We've done enough programming now to know that now f2 is running, not main. The reason, though, is because now f2 is on the top of the call stack, so main just sits and waits until f2 returns.

So, f2 runs, printing "starting f2." It then calls f1, leading to this stack:

$\array{ \boxed{\texttt{f1()}} \\ \boxed{\texttt{f2()}} \\ \boxed{\texttt{main()}} }$

So, f1 runs, and f2 (and main) sits and waits. f1 prints "inside f1," then returns, "popping" off the stack, taking us back to this: $\array{ \boxed{\texttt{f2()}} \\ \boxed{\texttt{main()}} }$

f2, back on the top of the stack, gladly continues where it left off, printing "leaving f2," before returning and popping off the stack itself:

$\array{ \boxed{\texttt{main()}} }$

Back on the top of the stack, main now finishes, printing "leaving main," and returning, ending the program.

OK! So, recursion. Let's consider the standard first-recursion-example of finding a factorial using recursion:

int fac(int n) {
    if (n == 0)
        return 1;
    return n * fac(n-1);
}


If fac(3) is run, it runs through happily until it gets to "fac(n-1)", at which point fac(2) is placed on top of the stack. fac(2) begins to run, and fac(3) sits and waits for it to return. fac(1) is then placed on top, and fac(2) and fac(3) sit and wait. Finally, briefly, fac(0) is placed on the stack, just long enough to return 1.

The story so far looks like this:

$\array{ \phantom{\boxed{\texttt{f}}} \\ \phantom{\boxed{\texttt{f}}} \\ \phantom{\boxed{\texttt{f}}} \\ \boxed{\texttt{fac(3)}} } \longrightarrow \array{ \phantom{\boxed{\texttt{f}}} \\ \phantom{\boxed{\texttt{f}}} \\ \boxed{\texttt{fac(2)}} \\ \boxed{\texttt{fac(3)}} } \longrightarrow \array{ \phantom{\boxed{\texttt{f}}} \\ \boxed{\texttt{fac(1)}} \\ \boxed{\texttt{fac(2)}} \\ \boxed{\texttt{fac(3)}} } \longrightarrow \array{ \boxed{\texttt{fac(0)}} \\ \boxed{\texttt{fac(1)}} \\ \boxed{\texttt{fac(2)}} \\ \boxed{\texttt{fac(3)}} } \longrightarrow \array{ \texttt{return 1} \\ \boxed{\texttt{fac(1)}} \\ \boxed{\texttt{fac(2)}} \\ \boxed{\texttt{fac(3)}} }$

At this point, fac(1) picks up where it left off, multiplying its n = 1 by the 1 that fac(0) returned, and returning that product. fac(2) then multiplies that returned value of 1 by its n = 2, and returns the result. fac(3) then multiplies that 2 by its n = 3, returning a final answer of 6.

The second half of the story looks like this:

$\array{ \texttt{return 1} \\ \boxed{\texttt{fac(1)}} \\ \boxed{\texttt{fac(2)}} \\ \boxed{\texttt{fac(3)}} } \longrightarrow \array{ \phantom{\boxed{\texttt{f}}} \\ \texttt{return 1} \\ \boxed{\texttt{fac(2)}} \\ \boxed{\texttt{fac(3)}} } \longrightarrow \array{ \phantom{\boxed{\texttt{f}}} \\ \phantom{\boxed{\texttt{f}}} \\ \texttt{return 2} \\ \boxed{\texttt{fac(3)}} } \longrightarrow \array{ \phantom{\boxed{\texttt{f}}} \\ \phantom{\boxed{\texttt{f}}} \\ \phantom{\boxed{\texttt{f}}} \\ \texttt{return 6} \\ }$
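One way to watch this happen is to instrument fac with an extra depth parameter that indents each message by how deep we are in the call stack (a hypothetical tracing variant, not part of the original function):

```java
public class FacTrace {
    static int fac(int n, int depth) {
        String indent = "  ".repeat(depth);        // deeper calls indent further
        System.out.println(indent + "fac(" + n + ") pushed onto the stack");
        if (n == 0) {
            System.out.println(indent + "fac(0) returns 1");
            return 1;
        }
        int result = n * fac(n - 1, depth + 1);    // recursive call goes on top
        System.out.println(indent + "fac(" + n + ") returns " + result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fac(3, 0)); // the trace ends with the answer, 6
    }
}
```

The indentation grows as calls push onto the stack, then shrinks as each one pops off and returns.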

1.2 Tips for writing recursive methods

We're going to need a base case (the thing that makes everything stop running; in the above factorial example, it's if n == 0), and at least one recursive case. The base case is usually easier… what's an input that makes this problem really easy? Write that first, then make the recursive case build towards it.

Let's define a linked list of ints as follows:

public class LinkedList {

    private class Node {
        private int data;
        private Node next;
        //... remaining methods
    }

    //... remaining methods
}


Using that definition, a traversal of the list iteratively would look something like this:

void traverse() {
    Node n = head;
    while (n != null) {
        //... perform some action on n
        n = n.next;
    }
}


and here is a similar process, this time using recursion:

void traverse() {
    traverse(head);
}

void traverse(Node n) {
    if (n == null) //an empty list is a good base case
        return;

    //... perform some action on n
    traverse(n.next);
}


With the no-argument traverse() starting the traversal at the head (by calling traverse(head)), the two code snippets perform identical operations. Make sure you understand how this works. Soon, there will be data structures where the iterative version is pages long, and the recursive version is five lines. You want to know and understand this.

As an example of a recursive traversal that performs an action, consider a length() method:

public int length(Node cur) {
    if (cur == null)
        return 0;
    return 1 + length(cur.next);
}


Another example is adding to a sorted linked list. For example, if we have a linked list holding the integers 1, 2, and 4, then after running addInOrder(3), we would have a linked list with the integers 1, 2, 3, and 4.

public void addInOrder(int element) {
    head = addInOrder(element, head);
}

//This is a helper method, so our user doesn't have to know or care
private Node addInOrder(int element, Node n) {
    if (n == null) //we hit the end (or the list was empty), so insert here
        return new Node(element);
    if (n.data > element) {
        Node t = new Node(element);
        t.next = n;
        return t;
    }
    n.next = addInOrder(element, n.next); //insert somewhere in the rest of the list
    return n;
}


Make some examples to try the above on, and understand why it works! Your homework will go better.
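To experiment, here is one self-contained sketch of the pieces above, assuming the list tracks only a head reference (the class name and main method are illustrative scaffolding, not part of the notes' code):

```java
public class SortedListDemo {
    static class Node {
        int data;
        Node next;
        Node(int data) { this.data = data; }
    }

    Node head;

    public void addInOrder(int element) {
        head = addInOrder(element, head);
    }

    private Node addInOrder(int element, Node n) {
        if (n == null)               // end of list (or empty list): insert here
            return new Node(element);
        if (n.data > element) {      // element belongs just before n
            Node t = new Node(element);
            t.next = n;
            return t;
        }
        n.next = addInOrder(element, n.next); // otherwise, insert in the rest
        return n;
    }

    public static void main(String[] args) {
        SortedListDemo list = new SortedListDemo();
        for (int x : new int[]{1, 2, 4}) list.addInOrder(x);
        list.addInOrder(3);
        for (Node cur = list.head; cur != null; cur = cur.next)
            System.out.print(cur.data + " "); // prints 1 2 3 4
    }
}
```

Try tracing addInOrder(3) by hand: the calls walk past 1 and 2, stop at 4, and splice the new node in on the way back up.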

3 Big-O with Recursion

3.1 Determining the Big-O of a Recursive Function

We have some practice identifying the Big-O of iterative algorithms. However, now that we're (again) experts on recursion, it's important that we be able to identify the Big-O of recursive algorithms. We're not going to spend a huge amount of time on this; that's Algorithms' job (note for IT people: consider taking Algorithms. It's an Important Class). However, it is important that you can identify some of the most common patterns.

void traverse(Node node) {
    if (node == null) //base case
        return;

    //... perform some action on node

    traverse(node.next); //recursive case
}


Obviously, we have two cases; a base case, and a recursive case. We're going to write a function $$T$$ that describes the number of steps taken in each case as a function of the input. The input for this example will be the length of the remaining list, if we assume that node is the head. The base case occurs when $$n=0$$; that is, when the length of the list of nodes following the current node is 0, as happens when node is null. The base case is then easily defined as:

$T(0) = 2$

This is because when $$n=0$$, there is a constant amount of work to do (namely, do the comparison in the if-statement, and perform the return statement). Ultimately, this constant 2 won't really matter in the big-O, but when we're dealing with recursion it's safer not to do big-O until the end.

Now what about the other case - if $$n > 0$$?

$T(n) = 2 + T(n-1)$

That is, it is two constant steps plus doing the work of performing the work on the next node in the list. This is because when $$n > 0$$, we do some constant amount of work (if statement comparison, function call to traverse, etc), then run it all again on the remainder of the list.

The two of these functions together are known as a recurrence relation. So the recurrence relation of this algorithm is:

\begin{align*} T(0) &= 2 \\ T(n > 0) &= 2 + T(n-1) \end{align*}

The same exact thing can also be written as

$T(n) = \begin{cases} 2,& n=0 \\ 2 + T(n-1), & n > 0 \end{cases}$

3.2 Solving Recurrence Relations

Our five-step process for solving a recurrence relation is:

1. Write down the recurrence relation.
2. Expand the recurrence relation, for several lines.
3. Figure out what you would write for the $$i$$-th line.
4. Figure out what value of $$i$$ would result in the base case.
5. In your equation from (3), replace $$i$$ with the solution from (4).

OK, so we've written it. Let's do step 2.

\begin{align*} T(n) &= 2 + T(n-1)\\ &= 2 + \underbrace{2 + T(n-2)}_{T(n-1)}\\ &= 2 + 2 + \underbrace{2 + T(n-3)}_{T(n-2)}\\ \end{align*}

And so on. But this is probably enough to recognize the pattern. Let's do step 3. Hopefully you agree that on the i-th line, we'd have:

$T(n) = 2i + T(n-i)$

So when does this stop? Well, that's where the other part of our recurrence relation comes in: our base case equation tells us that $$T(0)$$ is just $$2$$. So, we figure out what value of $$i$$ makes $$T(n-i)$$ equal to our base case of $$T(0)$$. That happens when $$i=n$$, because then $$n-i=0$$.

Finally, we do step five, and take our equation $$T(n) = 2i + T(n-i)$$, and replace $$i$$ with our solution from step 4 ($$i=n$$), to get $$T(n) = 2n + T(0)$$. Because $$T(n-n) = T(0) = 2$$, the total number of steps is

$2n+2$

which is $$O(n)$$.
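If you want to double-check a closed form, you can always evaluate the recurrence directly and compare; here is a quick sketch (the method T just encodes the recurrence itself, it's not an algorithm from the notes):

```java
public class RecurrenceCheck {
    // T(0) = 2; T(n) = 2 + T(n-1)
    static int T(int n) {
        if (n == 0) return 2;
        return 2 + T(n - 1);
    }

    public static void main(String[] args) {
        // compare the recurrence against the closed form 2n + 2
        for (int n = 0; n <= 10; n++) {
            System.out.println("T(" + n + ") = " + T(n)
                + ", 2n+2 = " + (2 * n + 2));
        }
    }
}
```

Both columns agree for every n, which is exactly what "solving the recurrence" promised.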

3.3 A Simpler Example!

Let's do one more, and this time, let's do the simple recursive sum function:

int sum(int n) {
    if (n == 0) return 0;
    else return n + sum(n-1);
}


In the base case, we can say one step occurs, since all we are doing is returning 0 and this occurs when $$n=0$$.

$T(0) = 1$

The number of steps in the recursive case (when $$n>0$$) is one sum, one subtraction, and the function call — so three steps plus the number of steps in the recursive call.

$T(n) = 3 + T(n-1)$

We can now write the recurrence relation:

$T(n) = \begin{cases} 1,& n=0 \\ 3 + T(n-1), & n > 0 \end{cases}$

Let's now identify the pattern:

\begin{align*} T(n) &= 3 + T(n-1) \\ &= 3 + 3 + T(n-2) \\ &= 3 + 3 + 3 + T(n-3) \\ &\ \ldots \\ T(n) &= 3i + T(n-i) \end{align*}

Substituting for $$i$$ the value $$n$$, we get

$3n + T(n-n) = 3n+T(0) = 3n+1$

And clearly, $$3n+1$$ is $$O(n)$$
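As before, the closed form can be sanity-checked by evaluating the step-count recurrence alongside the function itself (a quick verification sketch; T here just mirrors the step counts we assigned above):

```java
public class SumCheck {
    static int sum(int n) {
        if (n == 0) return 0;
        else return n + sum(n - 1);
    }

    // step counts: T(0) = 1; T(n) = 3 + T(n-1)
    static int T(int n) {
        if (n == 0) return 1;
        return 3 + T(n - 1);
    }

    public static void main(String[] args) {
        for (int n = 0; n <= 10; n++) {
            System.out.println("sum(" + n + ") = " + sum(n)
                + ", T(" + n + ") = " + T(n)
                + " (3n+1 = " + (3 * n + 1) + ")");
        }
    }
}
```

Note the distinction: sum(n) is the answer the function computes, while T(n) counts the steps it took to compute it.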

3.4 A Harder Example!

Earlier, you saw an iterative version of binary search. Here's a slightly different recursive version, but the principle is the same. Let's now use recurrence relations to identify the Big-O.

int binarySearch(int[] A, int toFind) {
    return binarySearch(A, 0, A.length - 1, toFind);
}

int binarySearch(int[] A, int left, int right, int toFind) {
    if (right - left <= 1) {
        if (right >= 0 && toFind == A[right]) return right;
        else if (right >= left && toFind == A[left]) return left;
        else return -1; //not found
    }

    int mid = (left + right) / 2;
    if (toFind < A[mid]) return binarySearch(A, left, mid - 1, toFind);
    else if (toFind > A[mid]) return binarySearch(A, mid + 1, right, toFind);
    else return mid; //it must be equal, so we found it
}


Let's say that $$n$$ is defined as the distance between left and right, so the base case is when $$n \le 1$$. There are two recursive cases, but the worst-case cost of each is the same. So, if we count steps:

$T(n) = \begin{cases} 7,& n = 1 \\ 6 + T\left(\tfrac{n}{2}\right),& n > 1 \end{cases}$

(*If you disagree or come up with different values for 6 and 7, remember that those constants do not actually matter at all for the big-O, as long as they're constant! If you were solving this for a HW or exam and put down 3 and 9 instead of 6 and 7, no problem!)

So let's figure out the Big-O. Steps 2 and 3 are to expand the recurrence and find the pattern for the $$i$$-th line:

\begin{align*} T(n) &= 6 + T\left(\frac{n}{2}\right) \\ &= 6 + \underbrace{6 + T\left(\frac{n}{4}\right)}_{T\left(\frac{n}{2}\right)} \\ &= 6 + 6 + \underbrace{6 + T\left(\frac{n}{8}\right)}_{T\left(\frac{n}{4}\right)} \\ &= 6i + T\left(\frac{n}{2^i}\right)\\ \end{align*}

The next step is to determine how big $$i$$ has to be to hit the base case:

\begin{align*} \frac{n}{2^i} & \le 1 \\ 2^i &\ge n \\ i &\ge \log_2 n \end{align*}

Finally, we substitute our discovered value of $$i$$ into the formula.

$T(n) = 6(\log_2 n) + T\left(\frac{n}{2^{\log_2 n}}\right)$

Notice that $$2^{\log_2 n}$$ is $$n$$ by the definition of logarithms, so we can now solve to get the non-recursive total cost:

\begin{align*} T(n) &= 6(\log_2 n) + T\left(\frac{n}{2^{\log_2 n}}\right) \\ &=6\lg n + T(n/n) \\ &= 6\lg n + T(1) \\ & = 6\lg n + 7 \end{align*}

which of course is $$O(\log n)$$ as we expected.

(* From now on, I'm going to write $$\lg n$$ instead of $$\log_2 n$$, because I'm lazy. You should feel free to do the same.)

Hopefully, before the last line, it was clear to you at a minimum that this was sublinear, that is, faster than linear time as we identified before.
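The same sanity check works here: evaluate the recurrence for powers of two and compare against $$6\lg n + 7$$ (a quick sketch; the constants 6 and 7 are the assumed step counts from above):

```java
public class BinRecurrenceCheck {
    // T(1) = 7; T(n) = 6 + T(n/2), for n a power of two
    static int T(int n) {
        if (n <= 1) return 7;
        return 6 + T(n / 2);
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 1024; n *= 2) {
            // for a power of two, the count of trailing zero bits is log2(n)
            int lg = Integer.numberOfTrailingZeros(n);
            System.out.println("T(" + n + ") = " + T(n)
                + ", 6*lg(n)+7 = " + (6 * lg + 7));
        }
    }
}
```

Doubling n adds only 6 steps, which is the signature of logarithmic growth: the step count creeps up while n explodes.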

Credit: Ric Crabbe, Gavin Taylor and Dan Roche for the original versions of these notes.

Created: 2016-08-31 Wed 07:26
