## selectionSort

SelectionSort is one of the classic sorting algorithms. Here's
a nice simple iterative version of it:

    selectionSort(A, n)
    Input:  A, an array of n objects for which "<" is defined
            n, the size of A
    Output: side effect of leaving A in sorted order according to "<"

    for i from 0 to n-2 do
        m = i
        for j from i+1 to n-1 do
            if A[j] < A[m]
                m = j
        swap(A[i], A[m])
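The pseudocode translates almost line for line into Python. Here's a minimal sketch (the function name and the in-place style are my choices, not from the notes):

```python
def selection_sort(A):
    """Sort A in place by repeatedly selecting the minimum of the unsorted suffix."""
    n = len(A)
    for i in range(n - 1):           # i from 0 to n-2
        m = i                        # index of the smallest element seen so far
        for j in range(i + 1, n):    # j from i+1 to n-1
            if A[j] < A[m]:
                m = j
        A[i], A[m] = A[m], A[i]      # swap(A[i], A[m])
```

As in the spec, this works for any elements that support "<".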

In class we showed that its worst-case running time is $O(n^2)$,
which ... well, duh. More interestingly, we showed that its
worst-case running time is $\Omega(n^2)$. Put the two together,
and we arrive at the fact that the worst-case running time for
selectionSort is $\Theta(n^2)$.
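Concretely, the inner loop runs exactly $n - i - 1$ times for every input, so the total number of comparisons is

$$\sum_{i=0}^{n-2} (n - i - 1) = \sum_{k=1}^{n-1} k = \frac{n(n-1)}{2},$$

which is both $O(n^2)$ and $\Omega(n^2)$.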

## insertionSort

InsertionSort is another one of the classic sorting algorithms. Here's
a nice simple iterative version of it:

    insertionSort(A, n)
    Input:  A, an array of n objects for which "<" is defined
            n, the size of A
    Output: side effect of leaving A in sorted order according to "<"

    for i from 0 to n-2 do
        j = i + 1
        while j > 0 and A[j] < A[j-1] do
            swap(A[j], A[j-1])
            j = j - 1
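Again, a minimal Python sketch of the same idea (names are my own):

```python
def insertion_sort(A):
    """Sort A in place by sifting each new element back into the sorted prefix."""
    n = len(A)
    for i in range(n - 1):                    # i from 0 to n-2
        j = i + 1
        # Swap A[j] backward until it's no smaller than its left neighbor.
        while j > 0 and A[j] < A[j - 1]:
            A[j], A[j - 1] = A[j - 1], A[j]   # swap(A[j], A[j-1])
            j -= 1
```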

In class we showed that its worst-case running time is $O(n^2)$,
which ... well, duh. More interestingly, we showed that its
worst-case running time is $\Omega(n^2)$. This was more involved
than for selectionSort. Why? Because selectionSort's inner
loop always executes $n - i - 1$ times, no matter what the input
looks like. With insertionSort, however, the inner loop makes
anywhere from $0$ to $i + 1$ iterations, depending on the
particulars of the input, because the "A[j] < A[j-1]"
condition allows us to exit the inner loop early. To do this
analysis, we had to find concrete inputs for which insertionSort
is slow. Someone noticed that for reverse-sorted input, the
inner loop always iterates the maximum number of times (which is
$i + 1$), for a total of $\sum_{i=0}^{n-2} (i + 1) = n(n-1)/2$
iterations. So we derived $\Omega(n^2)$.
Once again, put the two facts together,
and we arrive at the fact that the worst-case running time for
insertionSort is $\Theta(n^2)$.
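One way to see the early-exit effect empirically is to count the "A[j] < A[j-1]" comparisons on sorted versus reverse-sorted input. This instrumented variant is my own, not from class:

```python
def insertion_sort_count(A):
    """Insertion sort on a copy of A; returns the number of element comparisons."""
    A = list(A)
    comparisons = 0
    for i in range(len(A) - 1):
        j = i + 1
        while j > 0:
            comparisons += 1
            if not (A[j] < A[j - 1]):
                break                          # early exit: prefix already in order
            A[j], A[j - 1] = A[j - 1], A[j]
            j -= 1
    return comparisons
```

On already-sorted input of size $n$ every inner loop exits after one comparison, giving $n - 1$ total; on reverse-sorted input it makes the full $n(n-1)/2$.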