- Why it doesn't make sense to say "the running time of gallopSearch is 42ms".
- Why it doesn't make sense to say "the running time of gallopSearch on an array of n elements is 181*lg(n) + 72 microseconds".
- Why specifying concrete worst case / best case / average case running-time functions in some time unit, like nanoseconds, doesn't make sense.
- Why it's undesirable to express running times in terms of "abstract machine operations" or "primitive operations" even though it's possible ... because, whoa Nelly, is that hard to do!
- Why grouping functions into classes by asymptotic growth rate makes sense; or "who cares about constants?"

Definition: Big-O notation

Given two functions \(T(n)\) and \(f(n)\) that always return positive numbers, \(T(n) \in O(f(n))\) if and only if there exist constants \(c, n_0 > 0\) such that, for all \(n\ge n_0\), \(T(n) \le c f(n)\).
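For example, here's the definition in action, with witnesses \(c\) and \(n_0\) that we get to choose ourselves: to show \(3n + 5 \in O(n)\), take \(c = 4\) and \(n_0 = 5\). Then for all \(n \ge 5\) we have \(3n + 5 \le 3n + n = 4n = c \cdot n\), so the definition is satisfied.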

Definition: Big-$\Omega$ notation

Given two functions \(T(n)\) and \(f(n)\) that always return positive numbers, \(T(n) \in \Omega(f(n))\) if and only if there exist constants \(c, n_0 > 0\) such that, for all \(n\ge n_0\), \(T(n) \ge c f(n)\).

Definition: Big-$\Theta$ notation

Given two functions \(T(n)\) and \(f(n)\) that always return positive numbers, \(T(n) \in \Theta(f(n))\) if and only if both \(T(n) \in O(f(n))\) and \(T(n) \in \Omega(f(n))\).
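As a quick worked example (with witnesses chosen just for illustration): \(3n + 5 \in \Theta(n)\), since \(3n + 5 \le 4n\) for all \(n \ge 5\) (so \(3n + 5 \in O(n)\)) and \(3n + 5 \ge 3n \ge 1 \cdot n\) for all \(n \ge 1\) (so \(3n + 5 \in \Omega(n)\)).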

In other words, we can ignore constants! This shouldn't shock you. This was the whole point of big-O in the first place. The upshot of this is that I never want to see an answer like O(3*n^2). Instead you should just say O(n^2). Learn to ignore those constants. All of this is true for Θ and Ω as well, by the way.

Theorem: If T(n) ∈ O(a*f(n)), where a > 0 is a constant, then T(n) ∈ O(f(n)).

Proof: Since T(n) ∈ O(a*f(n)), there are constants c and n0 such that for all n ≥ n0, T(n) ≤ c*(a*f(n)). Now, let c' = c*a. Then it's true that for all n ≥ n0, T(n) ≤ c'*f(n). Thus, we've satisfied the definition of T(n) being O(f(n)).
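To see the proof in action on one concrete case (the numbers here are just for illustration): suppose T(n) ∈ O(3*n^2) with witnesses c = 2 and n0 = 1, i.e. T(n) ≤ 2*(3*n^2) for all n ≥ 1. Taking c' = 2*3 = 6 gives T(n) ≤ 6*n^2 for all n ≥ 1, which is exactly what it means to say T(n) ∈ O(n^2).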

    double xm, ym;
    xm = (x1 + x2)/2;
    ym = (y1 + y2)/2;

What's the running time for this? Well, certainly each statement takes constant time.

    double xm, ym;        <----- O(1)
    xm = (x1 + x2)/2;     <----- O(1)
    ym = (y1 + y2)/2;     <----- O(1)
    +______________________________________
              O(1) + O(1) + O(1)

So, can we say anything simpler than this? Well, here's another rule of O's, etc., for you: If T(n) is O(f(n)) + O(g(n)), then T(n) is also O(f(n) + g(n)). This is true for Θ and Ω as well. So, from our original problem, O(1) + O(1) + O(1) ∈ O(1 + 1 + 1) = O(3). However, since we know that constant multiples can be thrown out, O(3) ∈ O(1). Our algorithm for midpoints runs in constant time.
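In case you want to run it yourself, here's a minimal, self-contained version of the midpoint snippet (the class and method names, and the idea of returning the midpoint as an array, are my additions, not from the original):

    // A runnable sketch of the midpoint computation above.
    // Every statement is O(1), so the whole method is O(1).
    public class Midpoint {
        static double[] midpoint(double x1, double y1, double x2, double y2) {
            double xm = (x1 + x2) / 2;   // O(1)
            double ym = (y1 + y2) / 2;   // O(1)
            return new double[] { xm, ym };
        }

        public static void main(String[] args) {
            double[] m = midpoint(0, 0, 4, 2);
            System.out.println(m[0] + ", " + m[1]);   // prints "2.0, 1.0"
        }
    }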

    int sum = 0, i = 1, B;
    B = n*n;
    while(i < B)
    {
        sum = sum + i;
        i++;
    }

Let's try to add up costs again:

    int sum = 0, i = 1, B;    <----- O(1)
    B = n*n;                  <----- O(1)
    while(i < B)       \
    {                   |
        sum = sum + i;   >---- B * "cost of 1 loop iteration"
        i++;            |          = B*(O(1) + O(1) + O(1))
    }                  /

Now, looking at the program, you see that B is in fact just n^2, so simplifying our answer we get:

    T(n) ∈ O(1) + O(1) + n^2*(O(1) + O(1) + O(1))
         ∈ O(2) + n^2*O(3)
         ∈ O(1) + n^2*O(1)

To simplify this further, we need two more rules (both are stated, along with the ones we've been using, in the table of rules below). First, the Product Rule lets us pull the n^2 inside the O, since n^2 ∈ O(n^2):

    T(n) ∈ O(1) + n^2*O(1)
         ∈ O(1) + O(n^2*1)
         ∈ O(1) + O(n^2)

Then, since 1 ∈ O(n^2), the Sum Rule and the Dominant-Term Rule finish the job:

    T(n) ∈ O(1) + O(n^2)
         ∈ O(1 + n^2)
         ∈ O(n^2)
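If you want to convince yourself empirically, here's a small sketch (the counter and the method name are my own, not from the original) that counts iterations of the loop above for a few values of n; the count grows like n^2, matching the O(n^2) analysis:

    // Counts iterations of the while loop for a few values of n.
    public class LoopCount {
        static long iterations(int n) {
            long count = 0;
            int sum = 0, i = 1, B;
            B = n * n;
            while (i < B) {
                sum = sum + i;
                i++;
                count++;       // one loop iteration = O(1) work
            }
            return count;
        }

        public static void main(String[] args) {
            for (int n : new int[] { 10, 100, 1000 }) {
                // prints 99, then 9999, then 999999: roughly n^2 each time
                System.out.println("n = " + n + ": " + iterations(n) + " iterations");
            }
        }
    }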

Constant-Multiple Rule:
- If T(n) ∈ O(a*f(n)) then T(n) ∈ O(f(n))
- If T(n) ∈ Ω(a*f(n)) then T(n) ∈ Ω(f(n))
- If T(n) ∈ Θ(a*f(n)) then T(n) ∈ Θ(f(n))

We never say something like Θ(3); we say Θ(1). We never say something like O(2n^2); we say O(n^2).

Sum Rule:
- If T(n) ∈ O(f(n)) + O(g(n)) then T(n) ∈ O(f(n) + g(n))
- If T(n) ∈ Ω(f(n)) + Ω(g(n)) then T(n) ∈ Ω(f(n) + g(n))
- If T(n) ∈ Θ(f(n)) + Θ(g(n)) then T(n) ∈ Θ(f(n) + g(n))

We can simplify something like "T(n) ∈ Θ(n) + Θ(n^2)" to "Θ(n + n^2)", which we'll simplify further by a later rule.

Product Rule:
- If T(n) ∈ O(f(n)) * O(g(n)) then T(n) ∈ O(f(n) * g(n))
- If T(n) ∈ Ω(f(n)) * Ω(g(n)) then T(n) ∈ Ω(f(n) * g(n))
- If T(n) ∈ Θ(f(n)) * Θ(g(n)) then T(n) ∈ Θ(f(n) * g(n))

We can simplify something like "T(n) ∈ Θ(n)*Θ(lg n)" to "Θ(n lg n)". Or we can simplify something like "n*Ω(sqrt(n))" by noting that n ∈ Ω(n), so we can rewrite n*Ω(sqrt(n)) as Ω(n)*Ω(sqrt(n)), which our rule simplifies to Ω(n*sqrt(n)).

Dominant-Term Rule:
- If T(n) ∈ O(f(n) + g(n)) and g(n) ∈ O(f(n)) then T(n) ∈ O(f(n))
- If T(n) ∈ Ω(f(n) + g(n)) and g(n) ∈ O(f(n)) then T(n) ∈ Ω(f(n))
- If T(n) ∈ Θ(f(n) + g(n)) and g(n) ∈ O(f(n)) then T(n) ∈ Θ(f(n))

The condition "g(n) ∈ O(f(n))" is like saying "f(n) is the dominant term". This rule allows us to simplify Θ(n + n^2) to Θ(n^2), since n ∈ O(n^2).
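To tie the rules back to code, here's a sketch (my own example, not from the original notes) whose analysis uses all three: the nested loops cost Θ(n)*Θ(n) ∈ Θ(n^2) by the Product Rule, adding the first loop gives Θ(n) + Θ(n^2) ∈ Θ(n + n^2) by the Sum Rule, and the Dominant-Term Rule collapses that to Θ(n^2):

    // A single Θ(n) loop followed by nested loops doing Θ(n^2) work.
    // Product Rule:       nested loops are Θ(n)*Θ(n) ∈ Θ(n*n) = Θ(n^2).
    // Sum Rule:           total is Θ(n) + Θ(n^2) ∈ Θ(n + n^2).
    // Dominant-Term Rule: since n ∈ O(n^2), that simplifies to Θ(n^2).
    public class Rules {
        static long work(int n) {
            long total = 0;
            for (int i = 0; i < n; i++) {        // Θ(n)
                total += i;
            }
            for (int i = 0; i < n; i++) {        // Θ(n) iterations ...
                for (int j = 0; j < n; j++) {    // ... each doing Θ(n) work
                    total += (long) i * j;
                }
            }
            return total;
        }

        public static void main(String[] args) {
            System.out.println(work(100));
        }
    }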