Reading

Unit 5 Notes Sections 1.1 - 1.3.

Homework

Generic Search

We went over the generic "search" in a graph, where we are given a starting vertex $s$ and we search out all vertices reachable from $s$. The "parent" stuff is optional, but it's important! This search is "generic" because we don't specify how the "store" actually works. Remember: during the algorithm, nodes fall into three categories: not yet discovered, discovered but still waiting in the store, and removed from the store (fully processed). In the algorithm itself, nodes are only marked "visited" or not marked "visited", but to understand the process you should keep those three categories in mind.
search(graph G, vertex s)
-------------------------
initialize "store" S
add s to S
mark s as visited [set parent[s] = null]
while S is not empty do
  u = remove next vertex from S
  for each child v of u do
    if v not marked visited
       mark v as visited [set parent[v] = u]
       add v to S
      
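To make the "pluggable store" idea concrete, here is a minimal Python sketch of the generic search (my own illustration, not part of the notes). It assumes the graph is a dict mapping each vertex to a list of its neighbors, and the store's behavior is supplied by the caller as a pair of functions:

def generic_search(graph, s, add_to_store, remove_from_store, store):
    """Generic search over graph = {vertex: [neighbors]}.
    The caller decides what kind of 'store' is used."""
    parent = {s: None}                 # also serves as the "visited" marks
    add_to_store(store, s)
    while store:                       # while the store is not empty
        u = remove_from_store(store)
        for v in graph[u]:
            if v not in parent:        # v not marked visited
                parent[v] = u          # mark visited, record parent
                add_to_store(store, v)
    return parent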

Depth first search

If the store from generic search is a stack, we get what's called "depth-first search".
search(graph G, vertex s)
-------------------------
initialize stack S
S.push(s)
mark s as visited [set parent[s] = null]
while S is not empty do
  u = S.pop()
  for each child v of u do
    if v not marked visited
       mark v as visited [set parent[v] = u]
       S.push(v)
      
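For instance (a hypothetical usage of the sketch above, not from the notes), handing the generic search a plain Python list used as a LIFO stack gives depth-first search:

graph = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}   # hypothetical example graph
parent = generic_search(graph, 'a',
                        add_to_store=lambda st, v: st.append(v),
                        remove_from_store=lambda st: st.pop(),  # pop from the end: LIFO
                        store=[])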

Breadth first search

If the store from generic search is a queue, we get what's called "breadth-first search".
search(graph G, vertex s)
-------------------------
initialize queue Q
Q.enqueue(s)
mark s as visited [set parent[s] = null]
while Q is not empty do
  u = Q.dequeue()
  for each child v of u do
    if v not marked visited
       mark v as visited [set parent[v] = u]
       Q.enqueue(v)
      
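Likewise (again just an illustration of the sketch above), using a FIFO queue such as collections.deque as the store gives breadth-first search on the same hypothetical graph:

from collections import deque

parent = generic_search(graph, 'a',
                        add_to_store=lambda st, v: st.append(v),
                        remove_from_store=lambda st: st.popleft(),  # remove from the front: FIFO
                        store=deque())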

Dijkstra search

Of course we can use other structures as a "store" as well. For example, if we use a priority queue with the "priority" calculated in the right way, we actually get Dijkstra's algorithm for finding shortest paths in weighted graphs.
search(graph G, vertex s)
-------------------------
initialize priority queue Q based on vertices "dist"
Q.enqueue(s) ← first setting dist[s] = 0
mark s as visited [set parent[s] = null]
while Q is not empty do
  u = Q.dequeue() ← i.e. dequeue vertex with minimum "dist" on queue
  for each child v of u do
    if v not marked visited
       mark v as visited [set parent[v] = u] ← set dist[v] = dist[u] + weightOfEdge(u,v)
       Q.enqueue(v)
    else
       dnew = dist[u] + weightOfEdge(u,v)
       if dnew < dist[v]
         set dist[v] = dnew [set parent[v] = u]
         NOTE: Q may need to be updated, depending on how you set things up
      
Of course the real magic is how to provide the priority queue functionality needed by Dijkstra's algorithm efficiently! Notice that we don't just need to extract the node with minimum "dist". We also need to update the "dist" values of neighbors of the node we just dequeued. Supporting that efficiently is the key to making Dijkstra's algorithm efficient.
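
One common way to set this up in Python (a sketch under my own assumptions, and not the only way to handle the "Q may need to be updated" note) is a binary heap with lazy updates: instead of changing an entry's priority in place, push a new (dist, vertex) pair and skip stale entries when they come off the heap.

import heapq

def dijkstra(graph, s):
    """graph: dict {u: [(v, weight), ...]}.  Returns dist and parent maps."""
    dist = {s: 0}
    parent = {s: None}
    pq = [(0, s)]                      # heap of (dist, vertex) pairs
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:                  # stale entry: u already finished with a smaller dist
            continue
        done.add(u)
        for v, w in graph[u]:
            dnew = d + w
            if v not in dist or dnew < dist[v]:
                dist[v] = dnew
                parent[v] = u
                heapq.heappush(pq, (dnew, v))   # may leave an outdated entry behind
    return dist, parent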

Search Tree

Each node gets a unique "parent" in this search, and the edges from parent to child are, somehow, the important ones. They are the edges we really used in order to discover vertices. The other edges turned out not to help in finding anything new. When we have a set of vertices, all reachable from a common ancestor, and each having a unique "parent", what we have is ... a tree!

As an exercise in class, we broke into groups and took the same graph and tried the Queue, Stack and Dijkstra/Priority-queue versions of search as above. What I asked you to do was to draw the graph with the edges from the "parent" relation highlighted. The highlighted edges formed (of course) a tree. This tree is called "the search tree" for the graph. So even though we have an arbitrary graph, the search algorithm implicitly follows a subgraph of edges that form a tree. This tree is interesting!
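
One concrete payoff of the search tree (my own illustration, not from the notes): the path the search used to reach any vertex can be recovered just by walking parent pointers back to the start.

def path_to(parent, v):
    """Walk parent pointers from v back to the start vertex, then reverse.
    Assumes v was actually reached by the search (it appears in parent)."""
    path = []
    while v is not None:
        path.append(v)
        v = parent[v]
    return list(reversed(path))

For example, path_to(parent, 'd') on the parent map from one of the sketches above returns the tree path from 'a' to 'd'; which path you get depends on which store the search used.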