What do we mean by "the running time" of an algorithm?
For most of this course, we're going to be concerned with speed,
specifically the speed of algorithms. But what will we mean by
that? When we talk about the running time of an algorithm, what
should that mean?

Why it doesn't make sense to say "the running time of
gallopSearch is 42ms".

Why it doesn't make sense to say "the running time of
gallopSearch on an array of n elements is 181*lg(n) + 72
microseconds".
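To see the trouble concretely, here's a sketch (we don't have gallopSearch's code, so an ordinary binary search stands in for it; the function and array below are made up for illustration). Timing the exact same call twice on the same machine already produces two different numbers, and a different machine would produce yet others, so no single figure like "42ms" or formula like "181*lg(n) + 72 microseconds" can be *the* running time.

```python
import time

def binary_search(a, target):
    """Stand-in for gallopSearch: find target in sorted list a, or return -1."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

a = list(range(10_000_000))
# Two timings of the identical call -- they will almost never agree,
# and neither is reproducible on different hardware.
t0 = time.perf_counter(); binary_search(a, 123_456); t1 = time.perf_counter()
t2 = time.perf_counter(); binary_search(a, 123_456); t3 = time.perf_counter()
print(t1 - t0, t3 - t2)
```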

Why specifying concrete worst-case / best-case / average-case
running-time functions in some time unit, like nanoseconds,
doesn't make sense.

Why it's undesirable to express running times in terms of
"abstract machine operations" or "primitive operations" even
though it's possible ... because, whoa Nelly, is that ever hard
to do!
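A sketch of why the counting gets hairy fast, using a made-up three-line linear search (not code from the course): even here, the tally depends entirely on what you decree to be one "primitive operation".

```python
def linear_search_op_count(a, target):
    """Linear search that tallies its own 'primitive operations'.

    Returns (index_or_minus_1, operation_count) under one particular
    -- and debatable -- accounting scheme.
    """
    ops = 0
    i = 0
    ops += 1                     # count the assignment i = 0
    while True:
        ops += 1                 # count the loop test i < len(a)
        if not (i < len(a)):
            break
        ops += 1                 # count the comparison a[i] == target
        if a[i] == target:
            return i, ops
        ops += 1                 # count the increment i = i + 1
        i += 1
    return -1, ops

# Worst case (target absent): 3n + 2 operations by this tally.
# But is len(a) one operation or two? Is a[i] an index computation
# plus a memory load? Change your answers, and the formula changes.
print(linear_search_op_count([2, 4, 6, 8], 5))  # (-1, 14), i.e. 3*4 + 2
```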

Why grouping functions into classes by asymptotic growth
rate makes sense; or "who cares about constants?"

And so we arrive at the following agreement:
Although it'd be
great to know the exact expressions for the worst/best/average
case number of abstract machine instructions or primitive
operations, that's usually way too difficult for mere mortals to
determine. So, we usually settle for descriptions of the
asymptotic growth rate of those functions instead.
This is definitely a case of "settling". It's caving in to
practicality and acknowledging our limitations.
But it's the best we can do. And so we say things like "the
worst case running time function for gallopSearch grows like
lg(n) grows".
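And that statement, unlike a formula in microseconds, is something we can actually check. Here's a sketch (again with binary search standing in for gallopSearch, since its code isn't given here): count worst-case comparisons as n doubles. The count goes up by about 1 per doubling -- the signature of lg(n) -- and that observation holds on any machine, with any constants.

```python
def worst_case_comparisons(n):
    """Count comparisons binary search makes on its worst-case path
    through a sorted array of n elements (searching for an absent,
    too-large key, so the loop always recurses into the upper half)."""
    count, lo, hi = 0, 0, n - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        count += 1
        lo = mid + 1        # always "miss high": the worst-case path
    return count

# Each doubling of n adds exactly one comparison: lg(n) growth.
for n in [1_000, 2_000, 4_000, 8_000]:
    print(n, worst_case_comparisons(n))  # counts: 10, 11, 12, 13
```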