Unit 13: P vs NP

This unit will be a very short overview of one of the most important unsolved problems in theoretical computing: Are there any problems that cannot be solved faster than a brute-force search through every possible answer? In other words (as we will soon learn), does P equal NP?

If you found an answer to this question, you would win a million-dollar prize, and either solidify or destroy the security assumptions behind the cryptographic algorithms we just learned. Ready?

Beginning of Class 40

1 Problems and Algorithms

This short unit is really about hardness: the inherent difficulty of certain computational problems. We already know how to analyze the big-O running time of an algorithm and say whether it's fast or slow. But the task now is to talk about situations where every possible algorithm would be slow.

To be more precise, we'll say a problem is some well-defined transformation from acceptable inputs to correct outputs. For example, SORTING is a problem: the input is a list of things that can be compared to each other, and the output is the same things, in least-to-greatest order. We didn't say exactly what steps are taken to do the ordering, just what the output is supposed to be. In other words, a problem specifies the "what", but not the "how".

An algorithm is a specific series of steps that always correctly solves some particular problem. For example, MergeSort is an algorithm that correctly solves the SORTING problem.

In general, there might be many algorithms that correctly solve a given problem, and some might be faster than others. Back to our example of SORTING, we know the MergeSort algorithm has running time \(O(n\log n)\) and the SelectionSort algorithm has running time \(O(n^2)\). There are probably other correct SORTING algorithms that have other running times, so how can we talk about the running time of the SORTING problem itself?

For this, we define the inherent difficulty of a problem to be the fastest possible running time of any correct algorithm for that problem. For SORTING, we know the inherent difficulty is exactly proportional to \(n\log n\): we have an algorithm whose running time is \(O(n\log n)\), as well as a proof from class that any comparison-based sorting algorithm must take at least some constant times \(n\log n\) steps (a matching lower bound, written \(\Omega(n\log n)\)).

2 Reductions

For most problems, the situation is not so simple, and we can't say for sure what exactly is the inherent difficulty of the problem. This means that we don't know for sure whether we have the fastest algorithm yet.

Fortunately, we can still make comparisons between the inherent difficulties of problems, even if we don't know exactly what each inherent difficulty is, using the concept of a reduction.

A reduction is a way of solving one problem, using any algorithm for another problem. For example, we saw in the previous unit that any public-key cryptosystem can be used to solve the key exchange problem. So we would say that key exchange reduces to public-key cryptography. This means that the inherent difficulty of public-key cryptography is at least as much as that of key exchange.

Another example of a reduction would be between SORTING and MIN, where the MIN problem is defined as finding the smallest number out of a given list of \(n\) numbers. If you have any SORTING algorithm, you can easily solve the MIN problem by first putting the numbers in increasing order, and then just returning the first one. Naturally this is a pretty stupid way to solve MIN, but nonetheless it proves that the SORTING problem is at least as difficult as the MIN problem. Or to put it another way, the fastest algorithm for SORTING can't be faster than the fastest algorithm for MIN.
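The MIN-to-SORTING reduction is short enough to write out. Here is a minimal Python sketch (the function name is ours), using Python's built-in sorted in place of whatever SORTING algorithm you happen to have:

```python
# Sketch of the reduction from MIN to SORTING: any correct SORTING
# algorithm (here, Python's built-in sorted) solves MIN for us.

def min_via_sorting(numbers):
    """Find the smallest number by sorting first, then taking the front."""
    ordered = sorted(numbers)   # any SORTING algorithm could go here
    return ordered[0]           # first element of a sorted list is the minimum

print(min_via_sorting([7, 2, 9, 4]))  # prints 2
```

The reduction itself takes only constant extra work on top of the sort, which is exactly why SORTING must be at least as hard as MIN.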

3 Some problems

Here are some problems that will be used to frame the rest of the discussion. Some of these should be familiar, some a little less so. These aren't necessarily the most important computational problems in the world, just some useful examples for the discussion.

  • MIN: Finding the smallest out of a list of numbers.
  • SORTING: Putting a list of numbers in least-to-greatest order.
  • SHORTPATH: Find the shortest path between two points in a given graph.
  • LONGPATH: Find the longest (non-repeating) path between two points in a given graph.
  • DISCRETE-LOG: Given integers \(n\), \(x\), and \(y\), find a positive integer \(a\) such that \(x^a \bmod n = y\).
  • FACTORIZATION: Given integer \(n\), find two positive integers \(p, q\), both greater than or equal to 2, such that \(n=pq\).
  • SUBSET-SUM: Given a list of positive and negative numbers, find a subset of the list that adds up to exactly 0.
  • COLORING: Given a map split up into different regions or states, assign a color to each region so that any two regions that touch get different colors, using the smallest possible number of colors.
  • PROG-EQUAL: Given the source code for two computer programs, determine whether they have exactly the same behaviour (the same output on any possible input).

If you had to order these problems from easiest to hardest, how would you do it?

4 P

Classifying problems by difficulty is exactly what the study of theoretical computer science is all about. The idea is to put problems into different groups, called complexity classes, and then make comparisons between the groups.

P is the first complexity class we will look at, and it has a pretty simple definition: P is the set of all problems that can be solved in polynomial time.

Wait, what's polynomial time? A polynomial-time algorithm has running time \(O(n^k)\), where \(n\) as usual is the size of the input, and \(k\) is any constant. So for example \(O(n)\), \(O(n^2)\), even \(O(\sqrt{n})\) and \(O(n^{200})\) are all considered polynomial-time, but \(O(2^n)\) is not.
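To get a feel for why this distinction matters, here is a quick numeric comparison (the example values are ours) of a polynomial running time against an exponential one:

```python
# Compare a polynomial running time (n^2) against an exponential one (2^n)
# as the input size n grows.
for n in [10, 20, 50]:
    print(n, n**2, 2**n)
# By n = 50, n^2 is only 2500, while 2^n has passed 10^15.
```

Even a huge polynomial like \(n^{200}\) is eventually overtaken by \(2^n\), which is why the polynomial/exponential boundary is where theoreticians draw the line between "fast" and "slow".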

Of the example problems above, we know that MIN, SORTING, and SHORTPATH are all members of the complexity class P, because we know polynomial-time algorithms for all three of them. (Remember that Dijkstra's algorithm solves SHORTPATH.)

The real question is, which of the problems above are not in P, and what would that mean? Take FACTORIZATION for example. As we discussed in the last unit, there is no known polynomial-time algorithm to factor large integers. But that doesn't necessarily mean FACTORIZATION isn't in P, because maybe there is some fast algorithm for FACTORIZATION that we just haven't found yet. So we know for sure that SORTING is in P, but we don't know whether FACTORIZATION is in P or not.

5 NP

The second complexity class we'll talk about is NP, which has a somewhat more subtle definition than P. The most important thing to remember first is a common point of confusion: NP does not mean "Not P". Actually NP stands for "Nondeterministic polynomial-time", but that title won't mean much to you unless you've taken a computer science theory class. But don't worry, we can still say what NP is in a way that makes sense.

A problem is in NP if you can check the validity of a given answer to the problem in polynomial time. Notice that this doesn't say you can necessarily get the answer in polynomial time, just that any given answer can be quickly checked.

Let's take the COLORING problem for example: Given a map, use the smallest number of colors to assign different colors to regions that share a common border. I have no idea how to quickly come up with a coloring scheme for a given map. But if you're given a map with regional boundaries and colors on each region, you can just look through each region and check that all the bordering regions have different colors. So you can easily check the validity of an answer; that means COLORING is in NP.
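That checking step is easy to make concrete. Here is a minimal Python sketch (the data layout and function name are ours) that verifies a proposed coloring in one pass over the borders:

```python
# Sketch of a polynomial-time checker for a proposed COLORING answer.
# borders[r] is the set of regions that share a border with region r,
# and color[r] is the proposed color of region r.

def coloring_is_valid(borders, color):
    """Check that no two touching regions were given the same color."""
    for region, neighbors in borders.items():
        for other in neighbors:
            if color[region] == color[other]:
                return False    # two touching regions share a color: invalid
    return True

borders = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}}
color = {"A": "red", "B": "green", "C": "blue"}
print(coloring_is_valid(borders, color))  # prints True
```

The checker looks at each border at most twice, so its running time is polynomial in the size of the map, exactly as the definition of NP requires.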

FACTORIZATION is another example of an NP problem. Given an integer \(n\) and a proposed pair of factors \(p\) and \(q\), we can easily compute the product \(pq\) and check that it equals \(n\). We don't have any idea how to find \(p\) and \(q\) quickly, but if someone claims they've found them, we can check the result.
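Checking a proposed FACTORIZATION answer is essentially one multiplication. A minimal sketch (the function name and example numbers are ours):

```python
# Checking a proposed FACTORIZATION answer: one multiplication and
# a couple of comparisons, regardless of how hard n was to factor.

def factors_are_valid(n, p, q):
    """Verify that p and q are a legitimate factorization of n."""
    return p >= 2 and q >= 2 and p * q == n

print(factors_are_valid(1073, 29, 37))  # prints True, since 29 * 37 = 1073
```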

Notice that every problem that is in P is definitely in NP also. Why? Well, if you can compute the answer in polynomial time, checking someone else's answer can be done by performing the computation yourself, and checking it's the same as the proposed answer. Like for MIN, say you are given a list of numbers \(L\) and a proposed smallest number \(m\). In a simple \(O(n)\)-time loop, you can find the smallest number yourself, and check it's the same as \(m\). So MIN (and every other P-problem) is also a member of the set NP.
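The "recompute it yourself" checker for MIN can be written out directly. A sketch (names are ours) of the \(O(n)\)-time loop just described:

```python
# Checking a proposed MIN answer by recomputing it in one O(n) pass.

def check_min_answer(L, m):
    """Return True if m really is the smallest number in L."""
    smallest = L[0]
    for x in L[1:]:           # one linear scan through the list
        if x < smallest:
            smallest = x
    return smallest == m      # the answer is valid iff it matches ours

print(check_min_answer([5, 3, 8], 3))  # prints True
```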

At this point it might seem like every problem is a member of NP, and indeed many problems are. But not all of them! Take PROG-EQUAL, the task of determining whether two computer programs always do the same thing. Even if someone tells you the answer that yes, they are the same, there's no way to quickly check that this answer is correct. You couldn't even try all possible inputs and compare the outputs, because there are infinitely many possible inputs; your task would never be done! So PROG-EQUAL is an example of a problem which is not in P, and not in NP either. (And we won't talk about it anymore!)

6 Brute-force search

NP problems share an important property: they can always be solved by doing a brute-force, exponential-time search. This is a consequence of the definition of NP itself, that you can quickly check a potential answer to see if it's valid. That means that if you want to find the correct answer, you just go through every possible answer, check each one for validity, and return the best valid answer you find.

For example, one way of solving SHORTPATH would be to generate every possible path in the graph, then check if each one is an actual, valid path from point \(a\) to point \(b\), and return whichever valid path is shortest. In the case of SHORTPATH this is a stupid algorithm, since we know a much better way of doing the same thing (Dijkstra's). But for LONGPATH, this is actually a pretty good strategy for solving the problem; we don't know any other significantly faster way to do it!
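Here is what that brute-force strategy looks like for LONGPATH, as a minimal Python sketch (the graph representation and function name are ours). It tries every non-repeating path from \(a\) to \(b\) and keeps the longest, which takes exponential time in the worst case:

```python
# Brute-force search for LONGPATH: try every non-repeating path
# from a to b and return the longest one found (exponential time!).

def longest_path(graph, a, b):
    """graph[u] is the set of neighbors of u; returns the longest simple path."""
    best = None

    def extend(path):
        nonlocal best
        u = path[-1]
        if u == b:                     # reached the destination: a valid answer
            if best is None or len(path) > len(best):
                best = list(path)
            return
        for v in graph[u]:
            if v not in path:          # keep the path non-repeating
                extend(path + [v])

    extend([a])
    return best

graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(longest_path(graph, 1, 4))  # prints [1, 2, 3, 4]
```

Each candidate path is quick to check; it's the sheer number of candidates that makes this slow.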

Another example is DISCRETE-LOG. Given \(n, x, y\), we could try every possible exponent \(a=1, 2, 3, 4, \ldots\) and compute every \(x^a\bmod n\), until we find one where \(x^a\bmod n = y\), and then we've found the answer.
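That search can be sketched in a few lines of Python (the function name and example values are ours); note that each new power is computed from the previous one with a single modular multiplication:

```python
# Brute-force DISCRETE-LOG: try a = 1, 2, 3, ... until x^a mod n == y.

def discrete_log(n, x, y):
    value = x % n                 # value == x^1 mod n
    a = 1
    while value != y:
        value = (value * x) % n   # bump value up to x^(a+1) mod n
        a += 1
        if a > n:                 # every residue has cycled: no answer exists
            return None
    return a

print(discrete_log(17, 3, 13))  # prints 4, since 3^4 = 81 = 13 (mod 17)
```

The check for each guess is one comparison, but there can be up to about \(n\) guesses, and \(n\) itself is exponential in the number of digits of the input. That's why this is an exponential-time algorithm even though the loop looks innocent.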

Another way to say this is that NP is the class of all problems that can be solved by guess-and-check. It doesn't mean that it's going to be fast to solve them by guess-and-check — indeed, that will take exponential time — but it is a possible way to solve these problems.
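SUBSET-SUM from our list makes the guess-and-check pattern especially clear. In this sketch (names are ours), every subset is a "guess", checking a guess is a quick sum, and the slow part is that there are \(2^n\) guesses:

```python
# Guess-and-check for SUBSET-SUM: each subset is one guess, and
# checking a guess (summing it) is fast; there are just 2^n guesses.
from itertools import combinations

def subset_sum(nums):
    """Return a non-empty subset of nums summing to 0, or None."""
    for size in range(1, len(nums) + 1):
        for subset in combinations(nums, size):   # generate the next guess
            if sum(subset) == 0:                  # the quick check
                return list(subset)
    return None

print(subset_sum([3, -5, 8, 2]))  # prints [3, -5, 2]
```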

7 The million-dollar question

To summarize so far: we know some problems are definitely in P, like SORTING and MIN. And all these problems are definitely in NP also. We also know some problems are definitely in NP, like FACTORIZATION and LONGPATH. These problems might be in P, but we don't know yet, because no one has come up with any polynomial-time algorithms to factor integers or find the longest path in a graph.

In fact, even though the concept of NP was introduced in 1971, in all that time not a single problem has been proven to be in NP but not in P. And of course, we also don't know for sure that every NP problem can be solved in polynomial time somehow. So while it seems obvious that the sets P and NP are not exactly the same set, we don't have a proof either way of whether or not P equals NP.

If you could prove that just one NP problem is not solvable in polynomial time, that would prove that the sets P and NP are not equal. This would be the most famous result in the history of computing, and would earn you a million-dollar prize from the Clay Mathematics Institute.

On the other hand, if you could prove that every NP problem can in fact be solved in polynomial time, that would prove that P=NP. This would solve the same open problem in mathematics, but would (potentially) have much more dire consequences, since it would prove that the number-theoretic problems like FACTORIZATION and DISCRETE-LOG that we've based all modern cryptography on are not as hard to solve as we have assumed. In other words, you would break the Internet.

8 NP-Complete Problems

The biggest progress towards solving the P vs NP question has been in classifying some problems as NP-Complete. Formally speaking, a problem A is NP-Complete if A is a member of NP, and every other NP problem can be reduced to A. In other words, solving a single NP-Complete problem in polynomial time would give you a polynomial-time solution to every other NP problem.

It's kind of surprising that any NP-Complete problem actually exists. But many of them do! Out of the list above, LONGPATH, COLORING, and SUBSET-SUM have all been proven to be NP-Complete. That means, for example, that any polynomial-time algorithm to find the longest path in a graph can (somehow) be used to factor integers quickly. You can find a longer list of NP-Complete problems on Wikipedia.

One of the interesting things here is that the two most important hard problems for cryptography that we've seen, FACTORIZATION and DISCRETE-LOG, are not known to be NP-Complete. Most of the computing world seems to believe that these are harder than the problems in P, but not quite as hard as the NP-Complete ones. Remember, if someone had a fast algorithm to solve these problems, they would be able to quickly crack Diffie-Hellman or RSA encryption. It would be really nice to have a proof that cracking these cryptographic protocols is hard or "practically impossible", but no one has been able to prove that yet. Maybe you will?