Stacks

Our second ADT is the Stack. We're pretty familiar with stacks already, so this might not be so hard. Stacks work like a stack of plates: you only add plates onto the top of the stack, and you only remove plates from the top of the stack. This makes them a LIFO (last-in, first-out) structure.

The methods of a stack are:

  1. push() - add an element to the top of the stack.
  2. pop() - remove and return the top element of the stack.
  3. peek() - return the top element of the stack without removing it.

Let's consider an example. On the left column of this table, I will have some commands. In the middle, I'll put what is returned, if anything. On the right, I'll put the state of the stack, with the top of the stack on the left.

Command    Returned Value    Stack
push(3)    -                 3
push(5)    -                 5 3
pop()      5                 3
peek()     3                 3
push(4)    -                 4 3
peek()     4                 4 3
pop()      4                 3
pop()      3                 (empty)
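The sequence in the table can be reproduced with Java's built-in ArrayDeque, which is the standard class to use as a stack. (The text doesn't specify a language; this snippet is just an illustration.)

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class StackDemo {
    public static void main(String[] args) {
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(3);                    // stack: 3
        stack.push(5);                    // stack: 5 3
        System.out.println(stack.pop());  // prints 5; stack: 3
        System.out.println(stack.peek()); // prints 3; stack: 3
        stack.push(4);                    // stack: 4 3
        System.out.println(stack.peek()); // prints 4; stack: 4 3
        System.out.println(stack.pop());  // prints 4; stack: 3
        System.out.println(stack.pop());  // prints 3; stack: (empty)
    }
}
```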

Implementation with Different Data Structures

Implementation with linked lists should seem relatively easy to you; if "first" refers to the top element of the stack, then push() is just an add-to-front, pop() is a remove-from-front, and peek() is just return first.getData(); (modulo error checking for an empty stack, of course).

This means that for stacks implemented by linked lists, all three methods can be run in O(1) time.
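A minimal sketch of such a linked-list stack in Java (the field name first follows the text's first.getData() convention, but the class itself, including the names LinkedStack and Node, is my own illustration):

```java
public class LinkedStack {
    // Each node holds one element and a reference to the node below it.
    private static class Node {
        int data;
        Node next;
        Node(int data, Node next) { this.data = data; this.next = next; }
    }

    private Node first; // "first" is the top of the stack

    // push: add-to-front, O(1)
    public void push(int value) {
        first = new Node(value, first);
    }

    // pop: remove-from-front, O(1)
    public int pop() {
        if (first == null) throw new RuntimeException("empty stack");
        int data = first.data;
        first = first.next;
        return data;
    }

    // peek: return first's data without removing it, O(1)
    public int peek() {
        if (first == null) throw new RuntimeException("empty stack");
        return first.data;
    }

    public boolean isEmpty() { return first == null; }
}
```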

What about implementation with arrays? It may seem like push() would be O(n), because every so often the array has to be resized and all of its elements copied over. However, thanks to something called amortized analysis, this isn't so.

Let's use the scheme that every time we run out of room on our array, we double the size of the array. Will we have lots of unused indices? Yup. But, if our biggest number of elements in the stack is n, the most memory we'll use is 2n. And, while that might hurt, at least that's still O(n). (Yup! We can (and do) use Big-O in reference to memory, too!)

So, OK, we only resize every time we actually fill this too-big array. Isn't the worst case still O(n)? Well, sort of. For the first time, our "number of operations as a function of n" function isn't smoothly increasing; instead, it's spiky. This is because almost all of the time, a push is O(1). However, every now and then, it's O(n).

Pretend you have a business. Every 10 years, you have to buy a $100,000 machine. You have two ways you could consider this; either, every ten years you lose massively, having to pay $100,000, or every year you put aside $10,000 to be used later. This is called amortizing the cost. We can do the same thing here.

Let's assume assigning one element to an array is 1 operation. Again, if we double the size of the array every time we need more space, starting with an array of size one, the following occurs:

Size of array    Cost for each push
1                1 2                                  (First we assign one thing to the size-1 array. Then we assign two things to the new array of size 2.)
2                3 0                                  (Amortize the above; 1 and 2 is the same as 3 on the first push.)
2                3 0 3                                (The third push requires moving three things to a new array of size 4.)
4                3 3 0                                (Amortize.)
4                3 3 0 1                              (Simple push.)
4                3 3 0 1 5                            (Move five elements to a new array of size 8.)
8                3 3 3 3 0                            (Amortize.)
8                3 3 3 3 0 1 1 1 9                    (Three more simple pushes, then copy nine things to an array of size 16.)
16               3 3 3 3 3 3 3 3 0                    (Amortize.)
16               3 3 3 3 3 3 3 3 0 1 1 1 1 1 1 1 17  (Seven simple pushes, then move 17 things to a new array of size 32.)
32               3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 0   (Amortize.)

Noticing the theme? No matter how big our stack becomes, the amortized cost of a single push never gets bigger than 3. So, the cost of a push, even with resizing the array, is O(1).
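The doubling scheme described above can be sketched as follows (the class name ArrayStack and its details are my own illustration, not from the text):

```java
public class ArrayStack {
    private int[] items = new int[1]; // start with an array of size 1
    private int size = 0;             // number of elements actually in the stack

    // push: amortized O(1); doubles the array only when it is full
    public void push(int value) {
        if (size == items.length) {
            int[] bigger = new int[items.length * 2];
            for (int i = 0; i < size; i++) {
                bigger[i] = items[i]; // copy all n elements: the occasional O(n) step
            }
            items = bigger;
        }
        items[size++] = value;
    }

    // pop: O(1); no copying needed, just shrink the logical size
    public int pop() {
        if (size == 0) throw new RuntimeException("empty stack");
        return items[--size];
    }

    // peek: O(1)
    public int peek() {
        if (size == 0) throw new RuntimeException("empty stack");
        return items[size - 1];
    }

    public boolean isEmpty() { return size == 0; }
}
```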

          Linked List    Array
push()    O(1)           O(1)
pop()     O(1)           O(1)
peek()    O(1)           O(1)

Usefulness of Stacks

OK, so stacks are fast! What can we do with them?

  1. Reverse stuff - pushing everything on, and then popping it all off puts it in reverse order.
  2. Maze solving - every time you come to a junction, push it onto the stack. When your path doesn't work out, pop back to the last junction and go the other way. If there are no directions left to try there, pop again.
  3. For more complex uses of stacks, stay tuned...
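Use 1 above, reversal, is simple enough to show directly. A quick sketch using Java's ArrayDeque as the stack (the helper name reverse is my own):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class Reverse {
    // Push every character onto a stack, then pop them all off:
    // the last character pushed comes off first, reversing the string.
    static String reverse(String s) {
        Deque<Character> stack = new ArrayDeque<>();
        for (char c : s.toCharArray()) {
            stack.push(c);
        }
        StringBuilder out = new StringBuilder();
        while (!stack.isEmpty()) {
            out.append(stack.pop());
        }
        return out.toString();
    }
}
```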