If your program asks for a random bit of data, the subsequent computation will depend on whether a 1 or a 0 was returned. However, you'll get one or the other, and the program will proceed with a single computation based on that value. In nondeterministic computation you can request a "nondeterministic bit" and, instead of getting one or the other, the computation branches off into two alternate realities, or "alternate computations", which proceed simultaneously - one branch corresponding to getting a 1 and the other corresponding to getting a 0. If you have several of these "nondeterministic bits" in your program, you'll have many "alternate reality computations" proceeding simultaneously. That's nondeterministic computing.
Great, but in the end we need one single answer out of these many simultaneous "alternate reality" computations. What'll it be? Well, recall that we've been restricting ourselves to decision problems in our discussions of P and NP. For nondeterministic computation, we need to do the same thing. So, assuming we're trying to solve a decision problem, if every alternate computation comes up with the result "false", then the result of the nondeterministic computation is "false". As long as at least one of the alternate computations comes up with the result "true", the result of the nondeterministic computation is "true".
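We can sketch this "one branch per bit string, OR the results together" rule in a few lines of Python. This is only a deterministic simulation for illustration (the names `nondet_run` and `toy` are made up for this sketch); a real nondeterministic machine would explore all the branches at once rather than one after another.

```python
from itertools import product

def nondet_run(computation, num_bits):
    """Deterministically simulate a nondeterministic computation that
    requests num_bits nondeterministic bits: run every "alternate
    reality" (every possible bit string) and OR the answers together."""
    return any(computation(bits) for bits in product([0, 1], repeat=num_bits))

# Toy decision problem: is there a branch whose bits come out (1, 0, 1)?
def toy(bits):
    return bits == (1, 0, 1)

print(nondet_run(toy, 3))  # True: exactly one of the 8 branches says "true"
```

Note that the simulation runs all 2^n branches, so it takes exponential time even though each individual branch is fast - which is exactly why nondeterminism is a useful theoretical model rather than a practical speedup.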
Real computers, like the one I'm typing on right now, are deterministic, not nondeterministic. The nondeterministic model of computing is not supposed to model real computers; it is supposed to help categorize certain kinds of computations. However, there is this idea of a "quantum computer" out there, and quantum computers may be able to compute like our nondeterministic machines. This kind of "both at once" idea is present in quantum mechanics - Schroedinger's Cat, for example - and is the real basis for quantum computing. However, it's not clear that there will ever be general purpose programmable quantum computers. Still, the possibility is out there ...
So how does this characterization of NP as the class of problems that can be solved in polynomial time with a nondeterministic machine mesh with our definition of NP as the class of decision problems for which we have certificates that can be verified in polynomial time? Well, suppose I have a definition of certificates for instances of Problem A, where certificate-length is polynomial in the bit-length of the problem instance, and a polynomial time algorithm for verifying certificates. Then I could generate a string of nondeterministic bits as long as any certificate, and run the verification algorithm using that sequence of bits as the certificate. Thus, I would effectively be simultaneously verifying all possible certificates. One of them will give me TRUE as an answer if and only if the answer to the original instance is TRUE. How long does this take? Well, the verification algorithm is polynomial time and the certificate has length polynomial in the original input, so this gives us a nondeterministic algorithm that runs in polynomial time.
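Here's a concrete sketch of that idea for Simple Knapsack (defined below), with the certificate being one bit per weight saying whether that weight goes in the knapsack. The function names are invented for this example; the verifier runs in polynomial time, and the loop over all certificates stands in for the nondeterministic bits (so the deterministic simulation is exponential).

```python
from itertools import product

def verify_knapsack(weights, capacity, certificate):
    """Polynomial-time verifier: certificate is one bit per weight,
    indicating whether that weight is placed in the knapsack."""
    return sum(w for w, bit in zip(weights, certificate) if bit) == capacity

def nondet_knapsack(weights, capacity):
    """Simulate the nondeterministic machine: try every possible
    certificate and answer "true" if any branch verifies.  The real
    nondeterministic machine would check all 2^n certificates at once."""
    return any(verify_knapsack(weights, capacity, cert)
               for cert in product([0, 1], repeat=len(weights)))

print(nondet_knapsack([3, 5, 8, 2], 10))  # True: 8 + 2 = 10
print(nondet_knapsack([3, 5, 8, 2], 17))  # False: no subset sums to 17
```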
This question of whether there are problems in NP that cannot be solved in polynomial time is the biggest open problem in computer science and one of the biggest in math as well. Truth of the matter is, most of us are pretty convinced that P ≠ NP, even though we can't prove it.
The decision Problem B is reducible to the decision Problem A if there is an algorithm that transforms an instance $I_B$ of Problem B into an equivalent instance $I_A$ of Problem A. In this context, "equivalent" means that the answer to $I_B$ is true if and only if the answer to $I_A$ is true.
Problem A - Set Partition: Given positive integer weights $w_1, \ldots, w_n$, can the weights be partitioned into two multisets (i.e. duplicates allowed) of equal weight?

Problem B - Simple Knapsack: Given positive integer weights $w_1, \ldots, w_n$ and capacity C, does some sub-multiset of the weights sum to exactly C?
Suppose that a program runs in 1/10 of a second, and the machine it
runs on executes an instruction every 1/1,000,000 of a second. What's
the most memory this program could possibly use?
The most memory would be used if every instruction wrote to a new
memory location. Since instructions reference at most a word at a
time, this would use 1 word = 4 bytes per instruction. In 1/10 of a
second, our machine performs 100,000 instructions, so we use at
most 400,000 bytes, or 400K.
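The arithmetic above can be checked in a few lines; the 4-byte word size is the assumption stated in the text.

```python
# Machine executes one instruction every 1/1,000,000 s; program runs 1/10 s.
instructions = 1_000_000 // 10       # instructions executed in 1/10 second
bytes_per_word = 4                   # each instruction writes at most one word
max_memory_bytes = instructions * bytes_per_word

print(instructions, max_memory_bytes)  # 100000 400000
```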
So, you should see how information about running time
translates into information about space usage. We know
now that Algorithm X requires only polynomial space, so
any output from X will have bit-size that's polynomial in
the input size.
Suppose Problem B is polynomial time reducible to problem A. If there is a polynomial time algorithm for solving A, then we have a polynomial time algorithm for solving B. Here it is:
I_B ---------------------------> I_A ---------------------------> solution
        Polynomial                        Polynomial
        Time                              Time
        Reduction                         Solution

s = size of I_B                  t = size of I_A = O(s^k)
reduction time = O(s^k)          solution time = O(t^m) = O(O(s^k)^m) = O(s^{km})
So we deduce that the time to solve $I_B$ is
$O(s^{km})$, which is polynomial in the size of
$I_B$. Note that we used the fact that the output of
an algorithm that runs in $O(s^k)$ time has bit-length $O(s^k)$.
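The composition itself is just function composition; here is a minimal sketch, where `reduce_B_to_A` and `solve_A` are hypothetical stand-ins for the two polynomial-time pieces.

```python
def solve_B(instance_B, reduce_B_to_A, solve_A):
    """If B reduces to A in polynomial time and A is solvable in
    polynomial time, composing the two solves B in polynomial time."""
    instance_A = reduce_B_to_A(instance_B)  # O(s^k) time, output size O(s^k)
    return solve_A(instance_A)              # O(t^m) = O(s^{km}) time

# Toy illustration: "is n odd?" reduces to "is m even?" via m = n + 1.
is_even = lambda m: m % 2 == 0
reduce_odd_to_even = lambda n: n + 1
print(solve_B(7, reduce_odd_to_even, is_even))  # True: 7 is odd
```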
Input: Simple Knapsack instance $w_1, w_2, \ldots, w_n$ and capacity C
Output: A Set Partition instance that's equivalent to the input Simple Knapsack instance.
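A sketch of the algorithm in Python, reconstructed from the proof that follows: it adds a single balancing weight, $2C - S$ when $C \ge S/2$ and $S - 2C$ otherwise, where S is the sum of the input weights. The function name is made up for this sketch.

```python
def knapsack_to_partition(weights, capacity):
    """Reduce a Simple Knapsack instance (weights, capacity C) to an
    equivalent Set Partition instance by appending one balancing
    weight: w_{n+1} = 2C - S if C >= S/2, and w_{n+1} = S - 2C
    otherwise, where S is the total weight.  (When 2C == S the
    balancing weight is 0 and could be omitted.)"""
    S = sum(weights)
    if 2 * capacity >= S:
        extra = 2 * capacity - S
    else:
        extra = S - 2 * capacity
    return weights + [extra]

print(knapsack_to_partition([3, 5, 8, 2], 10))  # [3, 5, 8, 2, 2]
```

For the example instance, S = 18 and C = 10, so the balancing weight is 2*10 - 18 = 2, and the resulting weights [3, 5, 8, 2, 2] split into {8, 2} and {3, 5, 2}, each of weight 10.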
Proposition: Given Simple Knapsack instance $w_1, w_2, \ldots, w_n$ and capacity C, the above algorithm returns an equivalent Set Partition instance.
Proof: Let $S = w_1 + w_2 + \cdots + w_n$.

First, we'll show that if the Knapsack problem instance is true, so is the Set Partition problem instance we produce. Suppose that some subset of weights fills the knapsack to capacity C. The sum of all the weights not in the knapsack is S - C. If C ≥ S/2, the partition problem instance returned by the algorithm adds weight $w_{n+1} = 2C - S$, so adding $w_{n+1}$ to the set of weights not in the knapsack gives a set of total weight S - C + 2C - S = C, and we have a partition into two sets of weight C. If C < S/2, the partition problem instance returned by the algorithm adds weight $w_{n+1} = S - 2C$, so adding $w_{n+1}$ to the set of weights in the knapsack gives a set of total weight C + S - 2C = S - C, which equals the total weight of the set of weights not in the knapsack. Thus if the original knapsack problem instance can be solved, so can the returned partition problem instance.

Next we'll show that if the Set Partition problem instance is true, so is the Knapsack problem instance. Suppose that the n+1 weights in the partition problem instance can be partitioned into two sets of equal weight. If C ≥ S/2, the algorithm added weight $w_{n+1} = 2C - S$, so the sum of all weights in the partition problem instance is S + 2C - S = 2C. Thus both partition sets sum to C, and whichever one doesn't contain the weight $w_{n+1}$ is a solution to the original knapsack problem instance. If C < S/2, the algorithm added weight $w_{n+1} = S - 2C$, so the sum of all weights in the partition problem instance is S + S - 2C = 2(S - C). Thus both partition sets sum to S - C, and if we remove weight $w_{n+1}$ from whichever set contains it, the remaining weights sum to S - C - (S - 2C) = C and thus provide a solution to the original knapsack problem instance. Thus if the returned partition problem instance can be solved, so can the original knapsack problem instance.
We can't have one problem instance solvable (i.e. "true") without the other one being solvable as well, so either both are solvable or neither is solvable. Thus the two problem instances are equivalent.
Clearly, our algorithm takes polynomial time. Thus, Simple Knapsack is polynomial time reducible to Set Partition, which means among other things that if we can solve Set Partition in polynomial time we can also solve Simple Knapsack in polynomial time. So there's another way to win your million dollars!
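As a sanity check, we can brute-force both problems on small instances and confirm the reduction preserves the answer. Everything here is a throwaway sketch (the brute-force solvers take exponential time; only the reduction itself is polynomial).

```python
from itertools import product

def knapsack_to_partition(weights, capacity):
    """The reduction: append the balancing weight |2C - S|, which
    equals 2C - S when C >= S/2 and S - 2C otherwise."""
    return weights + [abs(2 * capacity - sum(weights))]

def knapsack_true(weights, capacity):
    """Brute-force Simple Knapsack: does some sub-multiset sum to C?"""
    return any(sum(w for w, b in zip(weights, bits) if b) == capacity
               for bits in product([0, 1], repeat=len(weights)))

def partition_true(weights):
    """Brute-force Set Partition: can the weights split into two
    multisets of equal weight (i.e. each summing to half the total)?"""
    total = sum(weights)
    return total % 2 == 0 and knapsack_true(weights, total // 2)

# The reduced instance must have the same answer as the original.
for ws, C in [([3, 5, 8, 2], 10), ([3, 5, 8, 2], 4), ([1, 1, 1], 2)]:
    assert knapsack_true(ws, C) == partition_true(knapsack_to_partition(ws, C))
print("reduction preserves answers on all test instances")
```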