These notes and Unit 5, section 7.

Quiz questions

NONE. Get the Project 3 software up and running.

Okay, so I want to solve an NP-Hard problem

A problem is called NP-Hard if it is at least as difficult as one of the NP-complete problems. Typically, we talk about optimization problems whose decision-problem variant is NP-Complete. So, for example, finding the minimum-sized hitting set, determining how to fill up the knapsack to get the maximum value, finding longest paths, or Hamiltonian cycles, or smallest vertex covers ... these are all NP-Hard problems and, unless P = NP and we've all been too stupid to find good algorithms for any of these problems, we can't come up with algorithms that are guaranteed to find optimum solutions quickly. So what do we do if we are faced with one of these problems and have to solve it?

If you need to deal with NP-Hard problems (and you don't feel up to the task of proving that P = NP) you have two basic options:

  1. get an optimum solution and hope you get it quickly enough in practice, or
  2. get a solution quickly and hope it's not too far from the optimum in practice,
... where "in practice" means on the particular input problems you face. What you can't get is solutions that are guaranteed optimum and guaranteed to be found quickly. Option 1 might lead you to use what are called "branch & bound" algorithms, while Option 2 might lead you to use greedy algorithms (and hope for the best), randomized greedy algorithms run multiple times, greedy algorithms + "local optimization", genetic algorithms, simulated annealing, or any number of other schemes that are well-suited to the particular problem you want to solve. There are some general techniques and approaches, but much of what you need to do depends on your particular problem and the kind of instances of that problem your application is likely to encounter. I should point out that there is a particular approach to Option 2 that has a special place in the study of algorithms, and that's what are called "Approximation Algorithms".
Approximation Algorithms come with some kind of guarantee about how close (or far!) from the optimum solution their result will be. Some problems are harder to approximate than others, in the sense that for some problems no polynomial time algorithm gives even a "pretty good" solution ... unless, of course, P = NP. The book includes a section on approximation algorithms. While they are certainly an important subject in theoretical computer science, my opinion is that they are not so important for people who want to solve NP-Hard problems in practice. So I'm going to look at other things.

We're going to first look at techniques for Option 2, and then we'll finish up the course with Option 1.

Greedy algorithms for "pretty good" solutions

The first and most obvious approach to getting "pretty good" solutions to NP-Hard problems is to devise greedy algorithms. There are usually many possible greedy choices, some of which, like Dijkstra's algorithm, can actually be quite sophisticated. Sometimes you can prove something about how well a particular greedy strategy works, or you can run experiments to see if it performs acceptably.
Example: Consider the travelling salesman problem (TSP) on an undirected, complete graph with positive edge weights. We might choose a starting vertex, v0, call it the "current" vertex, and construct a cycle by continually choosing, from the remaining unchosen vertices, the vertex that is closest to the "current" vertex, and calling that the new "current" vertex. A totally different greedy algorithm is to keep a set S of edges that will ultimately form our cycle, and to continually choose the smallest "good" edge to add to S, where an edge (u,v) is "good" as long as u and v are each involved in at most one edge of S and adding (u,v) to S does not create a cycle. When no "good" edges remain, the elements of S form a path, and we simply add to S the unique edge that completes our cycle.
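The Nearest Neighbor strategy is simple enough to sketch in a few lines of Python. This is just an illustrative sketch (the function names and the distance-matrix representation are my own choices, not anything from the project software):

```python
def nearest_neighbor_tour(dist, start=0):
    """Greedy TSP heuristic: from the "current" vertex, always move to
    the closest not-yet-chosen vertex.  `dist` is an n x n symmetric
    matrix of positive edge weights."""
    n = len(dist)
    tour = [start]
    unvisited = set(range(n)) - {start}
    current = start
    while unvisited:
        # The greedy choice: closest remaining vertex to the current one.
        nearest = min(unvisited, key=lambda v: dist[current][v])
        tour.append(nearest)
        unvisited.remove(nearest)
        current = nearest
    return tour  # the cycle is tour[0], tour[1], ..., tour[n-1], tour[0]

def tour_length(dist, tour):
    """Total weight of the cycle, including the edge back to the start."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
```

Note that this runs in O(n²) time: n iterations, each scanning the unvisited set.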

Pretty clearly the above example proves that nearest neighbor doesn't always give optimal solutions. Can you find a counter-example for the smallest-edge greedy algorithm?

Randomization and greedy algorithms: Greedy algorithms make choices, of course, but the choices are guided by "heuristics" or rules of thumb. We're not always sure they are the right choices. We can introduce some randomness in how we make our choices and run the resulting greedy algorithm several times, taking the best of the results as our "solution".

Example: Consider the Nearest Neighbor strategy for TSP, which was our first greedy idea above. We could simply choose randomly amongst all neighbors of the "current" vertex that are not already in the cycle. We could incorporate a little randomness into Nearest Neighbor by taking the nearest neighbor with probability p and choosing randomly from the remaining neighbors with probability 1-p, where p is adjustable. Or we could get really tricky and have the probability of choosing neighbor vi be a function of the distance of vi from the "current" vertex. In all cases, if we run the algorithm k times we will very likely have k different answers, and we simply take the best.
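Here's one way the "nearest with probability p" idea might be coded up, together with the run-it-k-times-and-keep-the-best wrapper. The parameter values and function names are illustrative choices of mine, not prescribed by the problem:

```python
import random

def randomized_nn_tour(dist, p, rng):
    """Nearest Neighbor, except that with probability 1 - p we ignore
    the heuristic and pick a random unvisited vertex instead."""
    n = len(dist)
    tour, unvisited, current = [0], set(range(1, n)), 0
    while unvisited:
        if rng.random() < p:
            # Usual greedy choice.
            nxt = min(unvisited, key=lambda v: dist[current][v])
        else:
            # Random choice among the remaining vertices.
            nxt = rng.choice(sorted(unvisited))
        tour.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    return tour

def tour_length(dist, tour):
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def best_of_k(dist, k=100, p=0.7, seed=42):
    """Run the randomized greedy algorithm k times; keep the best tour."""
    rng = random.Random(seed)
    tours = (randomized_nn_tour(dist, p, rng) for _ in range(k))
    return min(tours, key=lambda t: tour_length(dist, t))
```

Setting p = 1.0 recovers plain Nearest Neighbor; smaller p explores more of the solution space at the cost of worse individual tours.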

Local refinement or local optimization of solutions: Another important idea is to take a solution, usually from a greedy algorithm, and look for ways to improve it. These improvements are usually called "local" refinements, and understanding why means thinking about what we're doing in a different way. We'll talk about that in a little bit. What local refinements you use will be very problem specific, and perhaps even specific to the kind of problem instances your application produces. We'll look at a very well-known local refinement operation for TSP called 2-OPT. The basic idea is simple: You start with a cycle

v0, v1, ..., vn-1, v0
and you create a new cycle by deleting edges (vi-1,vi) and (vj,vj+1), where i < j, and adding edges (vi-1,vj) and (vi,vj+1). Equivalently, you can view this as simply reversing the sub-path vi, ..., vj.

How we use a local refinement operation like 2-OPT is up to us. We could simply try all possible 2-OPTs (which means trying all possible i,j combinations, which means O(n²) trials) and actually use only the i,j combination that improves our solution the most. If no i,j combination improves our solution, we have reached a local optimum, which means that no single 2-OPT operation yields a better solution. We might not have found the global optimum, i.e. the best possible cycle, but to get to the/a global optimum would require a sequence of 2-OPT operations in which the intermediate cycles we produce would be worse before they started to get better. Can you prove that given two edges in a cycle there is only one 2-OPT move on those edges that retains a cycle? Is there always a sequence of 2-OPT moves that takes a given cycle and transforms it into an optimal TSP solution?
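The "try all pairs, apply the best improving move, repeat until stuck" loop described above might be sketched like this in Python (assuming the same distance-matrix representation as before; the names are my own):

```python
def two_opt(dist, tour):
    """Best-improvement 2-OPT: scan all (i, j) pairs, apply the single
    move that shortens the cycle the most, and repeat until no 2-OPT
    move improves the tour -- i.e. until we reach a local optimum."""
    n = len(tour)
    tour = tour[:]  # work on a copy; don't clobber the caller's tour
    while True:
        best_delta, best_move = 0.0, None
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                a, b = tour[i - 1], tour[i]        # edge (v_{i-1}, v_i)
                c, d = tour[j], tour[(j + 1) % n]  # edge (v_j, v_{j+1})
                # Change in cycle length if we swap these two edges.
                delta = (dist[a][c] + dist[b][d]) - (dist[a][b] + dist[c][d])
                if delta < best_delta - 1e-12:
                    best_delta, best_move = delta, (i, j)
        if best_move is None:
            return tour  # local optimum: no single move helps
        i, j = best_move
        tour[i:j + 1] = reversed(tour[i:j + 1])  # reverse v_i ... v_j
```

Each pass over the pairs costs O(n²), and note that the delta computation looks at only the four affected edges rather than recomputing the whole tour length.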

An interesting problem is to find a 2-OPT move (or sequence of 2-OPT moves) to fix the sub-optimal TSP solution produced by the nearest neighbor greedy algorithm on the above example.

Do it all --- combine randomized greedy and local refinement: Finally, we could try combining all our ideas so far: we generate many candidate solutions with a randomized greedy algorithm, apply local refinement to each candidate solution until we arrive at a local optimum, and return the best result we find as our solution. Filling in the gaps in this general approach is tough, though. You need to decide on a randomized greedy algorithm, decide on one or more local refinement operations, and you have to put a lot of thought into implementing these things as efficiently as possible: The faster your algorithms run, the more candidate solutions you can generate and refine, the better your best solution is likely to be.
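For TSP, the whole recipe fits on one page. The sketch below combines a randomized Nearest Neighbor with 2-OPT refinement; the parameters k and p, and all the function names, are illustrative choices of mine, not the only (or best) way to fill in the gaps:

```python
import random

def tour_length(dist, tour):
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def randomized_nn(dist, p, rng):
    # Nearest Neighbor, except with probability 1 - p we take a
    # random unvisited vertex instead of the nearest one.
    n = len(dist)
    tour, unvisited, current = [0], set(range(1, n)), 0
    while unvisited:
        if rng.random() < p:
            nxt = min(unvisited, key=lambda v: dist[current][v])
        else:
            nxt = rng.choice(sorted(unvisited))
        tour.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    return tour

def refine_2opt(dist, tour):
    # Apply the best improving 2-OPT move (reverse tour[i:j+1])
    # until none exists -- a 2-OPT local optimum.
    n = len(tour)
    tour = tour[:]
    while True:
        best_delta, best_move = 0.0, None
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[(j + 1) % n]
                delta = (dist[a][c] + dist[b][d]) - (dist[a][b] + dist[c][d])
                if delta < best_delta - 1e-12:
                    best_delta, best_move = delta, (i, j)
        if best_move is None:
            return tour
        i, j = best_move
        tour[i:j + 1] = reversed(tour[i:j + 1])

def solve(dist, k=50, p=0.7, seed=0):
    # Generate k randomized greedy tours, refine each to a local
    # optimum, and return the best one found.
    rng = random.Random(seed)
    candidates = (refine_2opt(dist, randomized_nn(dist, p, rng))
                  for _ in range(k))
    return min(candidates, key=lambda t: tour_length(dist, t))
```

The efficiency point above matters here: the O(n²)-per-pass refinement dominates the running time, so a faster refinement step directly buys you a larger k.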