Now we know Big-O
We're a week in, and you've already learned the most important part of this class: Big-O notation. You absolutely have to understand this to be a computer scientist. Here are some examples of Big-O notation appearing in academic papers by your faculty:
"Consider the problem of computing the product of two degree-\(n\) polynomials or integers whose length is \(n\) machine words. Significant algorithmic progress has been made in improving the time complexity from \(O(n^2)\) for the classical algorithm. The two-way divide and conquer algorithm which we refer to as Karatsuba's ... has complexity \(O(n^{1.59})\). This was later generalized to a \(k\)-way divide and conquer scheme by Toom and Cook; in particular, when \(k=3\), an \(O(n^{1.47})\) algorithm is produced." -- Daniel Roche, "Space- and Time-Efficient Polynomial Multiplication."
"Unfortunately, the expressiveness of the model must be weighed against the \(O(n^3)\) cost of inverting the kernel matrix." -- Gavin Taylor, "Feature Selection for Value Function Approximation."
"Since the program doubles at each step, the size of \(\Psi_n\) is \(O(2^n)\)." -- Chris Brown and James Davenport, "The Complexity of Quantifier Elimination and Cylindrical Algebraic Decomposition."
"Our P-Ring router, called Hierarchical Ring (HR), is highly fault tolerant, and a rounder of order \(d\) provides guaranteed \(O(log_d P + m)\) range search performance in a stable system with P peers, where m is the number of peers with items in the query range." -- Adina Crainiceanu, et al., "Load Balancing and Range Queries in P2P Systems Using P-Ring."
Other Big-Stuff
Throughout this section, we are still talking about worst-case run times.
Remember, if \(f(n)\in O(g(n))\), then we can come up with a constant \(c\) such that for all \(n\) greater than some \(n_0\), \(f(n)\leq c\,g(n)\). If we take that expression and flip the inequality, so that \(f(n)\geq c\,g(n)\) for all \(n>n_0\), then \(f(n)\in \Omega(g(n))\).
As an example, we know the function \(3n+1 \in O(n)\), because we can set \(c=4\), and \(4n\) dominates \(3n+1\) for all \(n\geq 1\). Additionally, \(3n+1 \in\Omega(n)\), because we can set \(c=3\), and \(3n\) is dominated by \(3n+1\) for all values of \(n\).
When \(f(n)\in O(g(n))\) and \(f(n)\in\Omega(g(n))\), then we can say \(f(n)\in\Theta(g(n))\). Big-theta is the most precise of the three, since it bounds the function from both above and below. In this class, we'll mostly talk big-O; if you take Algorithms, that will mostly be big-theta.
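To make these definitions concrete, here is a minimal sketch in Python that numerically checks the bounds we just claimed for \(f(n)=3n+1\). A finite check like this is not a proof, of course, but it's a quick way to catch a badly chosen constant.

```python
def f(n):
    """The function we're bounding: f(n) = 3n + 1."""
    return 3 * n + 1

# Claimed: 3n <= f(n) <= 4n for all n >= 1, which witnesses
# f(n) in Omega(n) and in O(n), and therefore in Theta(n).
c_lower, c_upper, n0 = 3, 4, 1

for n in range(n0, 10_000):
    assert c_lower * n <= f(n) <= c_upper * n

print("3n <= 3n+1 <= 4n held for every n checked.")
```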
Our Main Big-O Buckets and Terminology
Big-O is extremely useful because it allows us to put algorithms into "buckets," where everything in a given bucket requires about the same amount of work. It's important that we use the correct terminology for these buckets, so we know what we're talking about. In increasing order (code sketches for several of these follow the list)...
- Constant time. The run time of these algorithms does not depend on the input size at all. \(O(1)\).
- Logarithmic time. \(O(\log(n))\). Remember, changing the base of the log is exactly the same as multiplying by a coefficient, so we don't tend to care which base it is.
- Linear time. \(O(n)\).
- n-log-n time. \(O(n\log(n))\).
- Quadratic time. \(O(n^2)\).
- Cubic time. \(O(n^3)\).
- Polynomial time. \(O(n^k)\), for any constant \(k\).
- Exponential time. \(O(b^n)\), for any constant \(b>1\). Note this is different from polynomial.
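To connect these buckets to actual code, here is a sketch in Python of loop structures that land in several of the buckets above. The function names are ours, purely for illustration; for \(O(n\log(n))\), think of merge sort or Python's built-in `sorted`.

```python
def constant(items):
    # O(1): touches a fixed number of elements, regardless of len(items).
    return items[0]

def logarithmic(items, target):
    # O(log n): binary search halves the remaining range each step.
    # Assumes items is sorted.
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def linear(items):
    # O(n): one pass over the input.
    total = 0
    for x in items:
        total += x
    return total

def quadratic(items):
    # O(n^2): a doubly nested pass over the input.
    pairs = []
    for x in items:
        for y in items:
            pairs.append((x, y))
    return pairs

def exponential(items):
    # O(2^n): enumerate every subset; the list doubles for each element,
    # so the output alone has 2^n entries.
    subsets = [[]]
    for x in items:
        subsets += [s + [x] for s in subsets]
    return subsets
```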
Fuzzily speaking, things that are polynomial time or faster are called "fast," while things that are exponential are "slow." There is a class of problems believed to be slow, called the NP-complete problems. If you succeed in either (a) finding a fast algorithm for an NP-complete problem, or (b) proving that no such algorithm exists, congratulations! You've just solved a Millennium Prize Problem, won a Turing Award, and get to have a full professorship, with tenure, at a university of your choice.
The following table compares these buckets with each other for increasing values of \(n\).
| \(n\) | \(\log(n)\) | \(n\) | \(n\log(n)\) | \(n^2\) | \(n^3\) | \(2^n\) |
|---|---|---|---|---|---|---|
| 4 | 2 | 4 | 8 | 16 | 64 | 16 |
| 8 | 3 | 8 | 24 | 64 | 512 | 256 |
| 16 | 4 | 16 | 64 | 256 | 4096 | 65536 |
| 32 | 5 | 32 | 160 | 1024 | 32768 | 4294967296 |
| 64 | 6 | 64 | 384 | 4096 | 262144 | \(\approx 10^{19}\) |
| 128 | 7 | 128 | 896 | 16384 | 2097152 | \(\approx 10^{38}\) |
| 256 | 8 | 256 | 2048 | 65536 | 16777216 | \(\approx 10^{77}\) |
| 512 | 9 | 512 | 4608 | 262144 | 134217728 | \(\approx 10^{154}\) |
| 1024 | 10 | 1024 | 10240 | 1048576 | 1073741824 | \(\approx 10^{308}\) |
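If you want to regenerate or extend this table yourself, a short script will do it. Here is a sketch in Python (the column formatting and the cutoff for switching to \(\approx 10^k\) notation are our own choices):

```python
import math

def approx(x):
    """Render huge integers as ~10^k rather than printing all their digits."""
    return x if x < 10**10 else f"~10^{len(str(x)) - 1}"

header = ["n", "log n", "n log n", "n^2", "n^3", "2^n"]
print(("{:>12}" * len(header)).format(*header))

n = 4
while n <= 1024:
    lg = int(math.log2(n))  # exact here, since n is a power of two
    row = [n, lg, n * lg, n * n, n**3, approx(2**n)]
    print(("{:>12}" * len(row)).format(*row))
    n *= 2
```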
To understand just how painful exponential-time algorithms are, note that the universe is about \(4.32\times10^{17}\) seconds old, and it is estimated that there are about \(10^{80}\) atoms in the universe.
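To put concrete times on that, here is a quick back-of-the-envelope sketch, assuming (hypothetically) a machine that performs \(10^9\) operations per second:

```python
UNIVERSE_AGE_SECONDS = 4.32e17  # the figure quoted above
OPS_PER_SECOND = 1e9            # assumed machine speed, purely hypothetical

for n in (32, 64, 128):
    seconds = 2.0 ** n / OPS_PER_SECOND
    print(f"2^{n} steps: {seconds:.3g} seconds "
          f"({seconds / UNIVERSE_AGE_SECONDS:.3g} universe-lifetimes)")
```

Even at a billion operations per second, a \(2^{128}\)-step computation takes hundreds of billions of universe-lifetimes.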