P1

We might try to improve on the "merge" algorithm presented in class and in the class notes (not the unit 2 notes) by creating a copy of only the first sorted range, while keeping the second sorted range in place in the array A. Here's how we might write that:
Algorithm: merge(A,i,m,j)
Inputs : array A, and indices i ≤ m ≤ j such that
         A[i..m] and A[(m+1)..j] are individually in sorted order
Outputs: array A is modified so that  A[i..j] contains its original
         elements in sorted order

A1 = new array copy of A[i..m]

k = i, i1 = 0, j1 = m - i, i2 = m+1, j2 = j

while i1 ≤ j1 and i2 ≤ j2 do            ← keep adding the smaller of the initial elements of A1 & A[i2..j2]
  if A[i2] < A1[i1]
    A[k] = A[i2], i2 = i2 + 1, k = k + 1
  else
    A[k] = A1[i1], i1 = i1 + 1, k = k + 1

while i1 ≤ j1 do                        ← add any remaining elements of A1
  A[k] = A1[i1], i1 = i1 + 1, k = k + 1

free A1
      
The advantages are twofold: first, we save half of the copying cost in both time and memory; second, we get to drop the original version's third while loop altogether. On the other hand, it's not clear that we won't run into trouble by writing something into A in the first loop that overwrites an unplaced value from the left half of the original range, thereby losing or corrupting data. Your job is to (a) prove that

$k + (j_1 - i_1 + 1) = i_2$

is an invariant for both loops in the above algorithm, and (b) use it to prove that we never overwrite an unplaced value in the range A[i2..j2], so the fear of losing or corrupting data is unfounded.
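
In case it helps to experiment with the algorithm, here is a minimal runnable sketch of it in Python (the function name half_copy_merge is mine, not from the class notes; the indices i, m, j are inclusive, as in the pseudocode above):

def half_copy_merge(A, i, m, j):
    # Merge sorted A[i..m] and A[(m+1)..j] in place, copying only the left half.
    A1 = A[i:m + 1]                  # copy of the left half A[i..m]
    k, i1, j1 = i, 0, m - i          # k writes into A; i1..j1 scans A1
    i2, j2 = m + 1, j                # i2..j2 scans the in-place right half

    # Keep adding the smaller of the leading elements of A1 and A[i2..j2].
    while i1 <= j1 and i2 <= j2:
        if A[i2] < A1[i1]:
            A[k] = A[i2]; i2 += 1
        else:
            A[k] = A1[i1]; i1 += 1
        k += 1

    # Add any remaining elements of A1; any leftover right-half elements
    # are already in their final positions.
    while i1 <= j1:
        A[k] = A1[i1]; i1 += 1; k += 1

For example, if A = [2, 5, 9, 1, 3, 8], then half_copy_merge(A, 0, 2, 5) leaves A as [1, 2, 3, 5, 8, 9].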

P2

Analogous to the tree we drew in class for insertion sort on three elements, draw the tree for selection sort on three elements. Please follow the same conventions!

P3

Based on the tree you just drew and the tree for insertion sort, how do the best- and worst-case numbers of comparisons for selection sort on three elements compare to those for insertion sort on three elements? Describe another interesting difference between these trees.

P4

Using the description of heapsort from Problem Set 1, what is the best-case running time of heapsort? Note that we already know it is $\Omega(n)$, simply from the observation that any sorting algorithm must look at every input element. So I'm really asking for an upper bound. I'd like the tightest possible upper bound, though, and I'd like a good argument for why the upper bound you give is valid.

P5

Write two neat $5 \times 5$ tables, with rows labeled 0..4 and columns labeled 0..4. In Table 1, the element at row i, column j should be $(i + j) \bmod 5$. In Table 2, the element at row i, column j should be $(i \times j) \bmod 5$.
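
If you want to check your tables afterward, here is a small Python sketch that prints both of them (the helper print_table is mine, just for illustration, and is not part of what you hand in):

def print_table(op, title, n=5):
    # Print an n-by-n table whose entry at row i, column j is op(i, j) mod n.
    print(title)
    print("    " + " ".join(str(j) for j in range(n)))   # column labels 0..n-1
    for i in range(n):
        row = " ".join(str(op(i, j) % n) for j in range(n))
        print(f"{i} | {row}")                            # row label, then row i
    print()

print_table(lambda i, j: i + j, "Table 1: (i + j) mod 5")
print_table(lambda i, j: i * j, "Table 2: (i * j) mod 5")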