Reading

These notes and Unit 1, sections 6-9

Quiz questions

What do we mean by "the running time" of an algorithm?

For most of this course, we're going to be concerned with speed, specifically the speed of algorithms. But what will we mean by that? When we talk about the running time of an algorithm, what should that mean?
  1. Why it doesn't make sense to say "the running time of gallopSearch is 42ms" (see the timing sketch just after this list).
  2. Why it doesn't make sense to say "the running time of gallopSearch on an array of n elements is 181*lg(n) + 72 microseconds".
  3. Why specifying concrete worst case / best case / average case running-time functions in some time unit, like nanoseconds, doesn't make sense.
  4. Why it's undesirable to express running times in terms of "abstract machine operations" or "primitive operations" even though it's possible ... because, whoa Nelly, is that hard to do!
  5. Why grouping functions into classes by asymptotic growth rate makes sense; or "who cares about constants?"
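To make points 1-3 concrete, here's a minimal timing sketch. It uses `java.util.Arrays.binarySearch` as a stand-in for gallopSearch (an assumption for illustration; swap in your own gallopSearch once you have it). Run it a few times, or on a different machine, and you'll get different numbers every time.

```java
import java.util.Arrays;

public class TimingDemo {
    public static void main(String[] args) {
        int[] a = new int[1_000_000];
        for (int i = 0; i < a.length; i++) a[i] = 2 * i;   // a sorted array

        // Time the same search, on the same input, five times in a row.
        for (int trial = 1; trial <= 5; trial++) {
            long start = System.nanoTime();
            Arrays.binarySearch(a, 1_999_998);              // stand-in for gallopSearch
            long elapsed = System.nanoTime() - start;
            System.out.println("trial " + trial + ": " + elapsed + " ns");
        }
        // You'll see five different numbers, and a different machine (or a busier
        // afternoon) gives different numbers again -- so no one of them can be
        // "the running time of the algorithm".
    }
}
```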
And so we arrive at the following agreement:
Although it'd be great to know the exact expressions for the worst/best/average case number of abstract machine instructions or primitive operations, that's usually way too difficult for mere mortals to determine. So, we usually settle for descriptions of the asymptotic growth rate of those functions instead.
This is definitely a case of "settling". It's caving in to practicality, and acknowledging our shortcomings. But it's the best we can do. And so we say things like "the worst case running time function for gallopSearch grows like lg(n) grows".
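So that the lg(n) claim has something concrete to point at, here is one way a gallopSearch could be written (a sketch only; the version used in this course may differ in its details). The gallop phase doubles the probe index, so it runs at most about lg(n) times, and the binary search that follows also takes at most about lg(n) steps.

```java
public class Gallop {
    // Returns an index of target in the sorted array a, or -1 if it's absent.
    static int gallopSearch(int[] a, int target) {
        if (a.length == 0) return -1;
        // Gallop phase: double the probe index until we pass target or the end.
        int bound = 1;
        while (bound < a.length && a[bound] < target) {
            bound *= 2;                        // 1, 2, 4, 8, ... : at most ~lg(n) doublings
        }
        // Binary-search phase: target, if present, lies in a[bound/2 .. min(bound, n-1)].
        int lo = bound / 2;
        int hi = Math.min(bound, a.length - 1);
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (a[mid] == target) return mid;
            else if (a[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] a = {2, 3, 5, 8, 13, 21, 34, 55};
        System.out.println(gallopSearch(a, 21));  // prints 5
        System.out.println(gallopSearch(a, 4));   // prints -1
    }
}
```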

Expressing growth rates: $O$, $\Theta$, and $\Omega$

We give the definition of what it means for function $g(n)$ to be "big-O" of function $f(n)$, where $f$ and $g$ are functions that produce positive results for positive inputs.
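In the usual form (the fine print here may be phrased slightly differently elsewhere in these notes):

$$g(n) \in O(f(n)) \iff \exists\, c > 0,\ n_0 > 0 \text{ such that } g(n) \le c \cdot f(n) \text{ for all } n \ge n_0.$$

For example, $181\lg(n) + 72 \in O(\lg n)$: take $c = 253$ and $n_0 = 2$, since $72 \le 72\lg(n)$ whenever $n \ge 2$, so $181\lg(n) + 72 \le 253\lg(n)$ there.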