checkTrail(G,n,L,k)
------------------
Input: directed graph G on n vertices {0,1,...,n-1}, array L of k vertices
Output: true if L is a "trail", i.e. in going from each vertex to the
next in L, we only follow valid edges.
check = true, i = 1, curr = L[0]
while i < k and check
    next = L[i]
    check = (there is an edge from curr to next in G)
    curr = next
    i = i + 1
return check
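The pseudocode above might be rendered in Python as follows. The representation is our choice (the pseudocode leaves it open): here G[v] is assumed to be a list of v's out-neighbors.

```python
def check_trail(G, L):
    """Return True if consecutive vertices in L are joined by edges of G.

    G[v] is assumed to be the list of v's out-neighbors
    (a representation choice; the pseudocode doesn't fix one).
    """
    check = True
    i = 1
    curr = L[0]
    while i < len(L) and check:
        nxt = L[i]            # "next" is a builtin name in Python
        check = nxt in G[curr]  # is there an edge from curr to nxt?
        curr = nxt
        i = i + 1
    return check
```

Note that a trail may revisit vertices: on the graph below, `[0, 1, 2, 0, 1]` is a valid trail even though 0 and 1 repeat.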
checkPath(G,n,L,k)
------------------
Input: directed graph G on n vertices {0,1,...,n-1}, array L of k vertices
NOTE: assume G is given as an adjacency list
NOTE: assume k ≤ n, since we can't have a path otherwise
Output: true if L is a "path", i.e. a trail in which no vertex
appears more than once.
A = an array of n bool's ← to remember which vertices have been visited
initialize elt's of A to false
check = true, i = 1, curr = L[0], A[curr] = true
while i < k and check
    next = L[i]
    if (A[next] = true) return false else A[next] = true
    check = (linearSearch(G[curr],0,G[curr].length-1,next) ≠ NOT_FOUND)
    curr = next
    i = i + 1
return check
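A Python sketch of the same routine, under the stated assumption that G is an adjacency list (G[v] is the list of v's out-neighbors); `linear_search` and `NOT_FOUND` are written out to match the pseudocode:

```python
NOT_FOUND = -1

def linear_search(arr, lo, hi, target):
    """Scan arr[lo..hi] for target; return an index or NOT_FOUND."""
    for j in range(lo, hi + 1):
        if arr[j] == target:
            return j
    return NOT_FOUND

def check_path(G, L):
    """Return True if L is a path in G: a trail with no repeated vertex."""
    n = len(G)              # G[v] is the list of v's out-neighbors
    A = [False] * n         # A[v] remembers whether v has been visited
    check = True
    i = 1
    curr = L[0]
    A[curr] = True
    while i < len(L) and check:
        nxt = L[i]
        if A[nxt]:          # repeated vertex, so not a path
            return False
        A[nxt] = True
        check = linear_search(G[curr], 0, len(G[curr]) - 1, nxt) != NOT_FOUND
        curr = nxt
        i = i + 1
    return check
```

On the same three-vertex graph as before, `[0, 1, 2]` is a path but `[0, 1, 2, 0]` is not, because it revisits vertex 0.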
One analysis for worst-case time in terms of $n$ only goes like this:
Creating and initializing the array $A$ takes $O(n)$ time. We
loop $O(k)$ times, but since $k \leq n$, we can make that $O(n)$
times. The linear search is $O(n)$ since no vertex has more
than $n$ neighbors. So I arrive at $T_w(n) \in O(n^2)$.
A sharper analysis notices that, because the visited array $A$
makes us return false on any repeated vertex, no vertex can ever
appear as curr twice, so we call linear search at most once for
each vertex. This means the total amount of work is at most the sum of the
neighbors list length over all vertices. But this is exactly
$e$, the number of edges! Thus, we get $O(n)$ for the
initializing $A$ and for the worst-case number of times through
the while loop, and we separately get $O(e)$ for the combined
cost of all the linear searches. This gives us a bound of
$O(n + e)$, which we can't simplify further, because $e$ could
be greater or less than $n$. But, $O(n + e)$ is a much more
precise bound: when a graph is sparse (e.g. no vertex has more
than four out-neighbors), $e$ is much
smaller than $n^2$. This is the kind of analysis we'll usually
want with graph algorithms.
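To make the gap concrete, here is a quick back-of-the-envelope comparison; the numbers (a million vertices, out-degree at most four) are made up purely for illustration:

```python
n = 10 ** 6        # hypothetical vertex count (illustration only)
e = 4 * n          # sparse: no vertex has more than four out-neighbors

naive = n * n      # the O(n^2) bound allows up to n^2 basic steps
sharp = n + e      # the O(n + e) bound allows up to n + e basic steps

print(naive // sharp)   # the sharper bound wins by a factor of 200,000
```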
Note: for nonnegative functions, $O(f(n) + g(n))$ is just a synonym for $O(\max(f(n),g(n)))$. So $O(n + e) = O(\max(n,e))$.