Homework
Print out the homework and answer the questions on that paper.

An example of a formal proof of correctness for an algorithm: The intersection algorithm
Our proof that if L1 and L2 are accepted by DFAs, then L1 ∩ L2 is accepted by a DFA is an algorithm that explicitly constructs a machine accepting L1 ∩ L2. To be complete, however, the proof needs to include a proof of correctness for the algorithm. I.e. we need to prove that the "Intersection algorithm" produces a machine that accepts the intersection of the languages accepted by the input machines.
Algorithm: Intersection
Input: DFA M1 = (Q1,Σ,δ1,s1,W1) and DFA M2 = (Q2,Σ,δ2,s2,W2)
Output: DFA M such that L(M) = L(M1) ∩ L(M2)
M = (Q,Σ,δ,s,W), where
• Q = Q1 x Q2
• δ:Q x Σ → Q
δ((q,p),a) = (δ1(q,a),δ2(p,a))
• s = (s1,s2)
• W = W1 x W2
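The construction above can be sketched in code. The following is my own Python rendering of the Intersection algorithm, not something from these notes: a DFA is a tuple (states, alphabet, delta, start, accepting), with delta a dict mapping (state, symbol) to a state, and the two example machines at the bottom are toy machines chosen just for illustration.

```python
# Build the product DFA from two DFAs over the same alphabet.
def intersect(M1, M2):
    Q1, sigma, d1, s1, W1 = M1
    Q2, _,     d2, s2, W2 = M2
    Q = {(q, p) for q in Q1 for p in Q2}                # Q = Q1 x Q2
    delta = {((q, p), a): (d1[(q, a)], d2[(p, a)])      # delta((q,p),a) = (delta1(q,a), delta2(p,a))
             for q in Q1 for p in Q2 for a in sigma}
    W = {(q, p) for q in W1 for p in W2}                # W = W1 x W2
    return (Q, sigma, delta, (s1, s2), W)

def accepts(M, w):
    """Run the DFA M on string w and report acceptance."""
    _, _, delta, s, W = M
    q = s
    for a in w:
        q = delta[(q, a)]
    return q in W

# Toy example (my own machines, not the ones in these notes):
# M1 accepts strings over {a,b} with an even number of a's;
# M2 accepts strings that end with b.
M1 = ({0, 1}, {"a", "b"},
      {(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1},
      0, {0})
M2 = ({2, 3}, {"a", "b"},
      {(2, "a"): 2, (2, "b"): 3, (3, "a"): 2, (3, "b"): 3},
      2, {3})
M = intersect(M1, M2)
print(accepts(M, "aab"))   # even number of a's, ends in b -> True
print(accepts(M, "ab"))    # odd number of a's -> False
```

Note how each component of the construction (Q, δ, s, W) appears as one line of the function, mirroring the bullet list above.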

Theorem: Algorithm Intersection meets its specification, i.e. L(M) = L(M1) ∩ L(M2).
Proof: The idea behind the algorithm is that a computation in M is supposed to simulate computations in M1 and M2 simultaneously. We make that precise with the following lemma (little theorem) that shows that one step of computation in M is equivalent to one step from M1 and one step from M2 simultaneously.

Lemma: ((p1,p2),u) ⇒M ((q1,q2),v) if and only if (p1,u) ⇒M1 (q1,v) and (p2,u) ⇒M2 (q2,v).
Proof: By the definition of $\Rightarrow_M$, write $u = xv$ where $x \in \Sigma$.
• Suppose ((p1,p2),u) ⇒M ((q1,q2),v). Then δ((p1,p2),x) = (q1,q2), so δ1(p1,x) = q1 and δ2(p2,x) = q2, which means (p1,u) ⇒M1 (q1,v) and (p2,u) ⇒M2 (q2,v).
• Suppose (p1,u) ⇒M1 (q1,v) and (p2,u) ⇒M2 (q2,v). Then δ1(p1,x) = q1 and δ2(p2,x) = q2, so δ((p1,p2),x) = (q1,q2), which means ((p1,p2),u) ⇒M ((q1,q2),v).

To prove our theorem, we need to show that $w \in L(M)$ if and only if $w \in L(M1)$ and $w \in L(M2)$. That means two parts:

• Part 1: Show that if $w\in L(M)$ then $w \in L(M1)$ and $w \in L(M2)$.
Since $w \in L(M)$, we know that $((p_1,q_1),w_1) \Rightarrow_M ((p_2,q_2),w_2) \Rightarrow_M \cdots \Rightarrow_M ((p_k,q_k),\lambda)$ where $w = w_1$ and $p_k \in W_1$ and $q_k \in W_2$. Applying the lemma to each $\Rightarrow_M$ we get that the following is a computation in $M1$ $(p_1,w_1) \Rightarrow_{M1} (p_2,w_2) \Rightarrow_{M1} \cdots \Rightarrow_{M1} (p_k,\lambda)$ which means $w \in L(M1)$, and in $M2$ we have the computation $(q_1,w_1) \Rightarrow_{M2} (q_2,w_2) \Rightarrow_{M2} \cdots \Rightarrow_{M2} (q_k,\lambda)$ which means $w \in L(M2)$.
• Part 2: Show that if $w \in L(M1)$ and $w \in L(M2)$ then $w\in L(M)$.
The proof of this is just the reverse of the "Part 1" argument. This works because our lemma is "if and only if".

QED.

A note about degrees of "proof"
(Repeating from last class ...) We professors tend to act as if something either is a "proof" or it's not. In fact, there are degrees of proof --- or perhaps more accurately there are degrees of pickiness. Do you provide precise details of everything, or do you gloss over those details? In this class there tend to be two levels of proof: the higher level, or "bigger picture" type of proof is like our algorithms. The lower level, nitpickier type also provides correctness proofs for the algorithms. Even at the nitpickier level, there are degrees: do I go all out with induction, or do I sketch out the arguments like I did in the above example?

The most sincere comment I can make about degrees of proof is this: when you're really convinced, that's enough detail. This, of course, requires that you are sincere in evaluating when you're convinced. Hopefully, with the algorithms we've done so far you were convinced when you truly understood how the algorithm worked. Further proof, like the one above, probably seemed pointless. That's why I'm going to put most of the emphasis on the higher level view of understanding the algorithms. However, be aware that it's easy to convince yourself of things that aren't true, so there is value in getting down to all the details and working out every last one. When that's been done, you're really sure.

Let's be lazy and sloppy ... part 1
Suppose we allow ourselves to be a little sloppy and lazy in how we define machines. Can we still make sense of things? For example, what about the following machine, M1:

It's not a proper deterministic finite automaton, because it has no transitions from q0 for characters a or b. On the other hand, it's pretty easy to figure out what I mean. If you ever encounter a "missing" transition, just reject the string! We've been adding "error" states or "trap" states to do the same kind of thing. At any rate, we can be lazy and sloppy this way, and still make sense of the machine.

Now, what does this machine do? It accepts the language of all strings over {a,b,c} that start with a c and have an even number of a's and b's.
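The missing-transition convention is easy to sketch in code: simulate a DFA whose transition table is partial, rejecting the moment a transition is undefined. The table below is my reconstruction of M1 from the description (the diagram itself isn't reproduced in these notes): q1 tracks "even number of a's and b's so far" and q2 tracks "odd", with no a or b moves out of q0.

```python
# A DFA with a partial transition table; a missing entry means reject.
delta = {
    ("q0", "c"): "q1",                       # no a/b moves from q0
    ("q1", "a"): "q2", ("q1", "b"): "q2", ("q1", "c"): "q1",
    ("q2", "a"): "q1", ("q2", "b"): "q1", ("q2", "c"): "q2",
}
start, accepting = "q0", {"q1"}

def accepts(w):
    q = start
    for x in w:
        if (q, x) not in delta:              # a "missing" transition:
            return False                     # just reject the string!
        q = delta[(q, x)]
    return q in accepting

print(accepts("cab"))    # starts with c, two of a/b -> True
print(accepts("ab"))     # no transition for (q0, a) -> False
```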

Let's be lazy and sloppy ... part 2
Suppose we allow ourselves to be a little sloppier and lazier. For example, I want a machine that accepts the same language as accepted by M1, but with the additional requirement that all strings must end with c as well as begin with c. I might modify M1 to get the machine M2:

This is really not a proper finite automaton --- even if we allow the sloppy practice of leaving out transitions. Why? Because now we have two transitions for state q1 and symbol c. This isn't allowed, and for a very good reason: now the computation is ambiguous. Where do I go when I read a c in state q1? For example, for input cacbcc I have three possible computations:
(q0,cacbcc) => (q1,acbcc) => (q2,cbcc) => (q2,bcc) => (q1,cc) => (q1,c) => (q1,λ)
(q0,cacbcc) => (q1,acbcc) => (q2,cbcc) => (q2,bcc) => (q1,cc) => (q1,c) => (q3,λ)
(q0,cacbcc) => (q1,acbcc) => (q2,cbcc) => (q2,bcc) => (q1,cc) => (q3,c) => stuck

Only the second of these actually accepts the string (which, by my description, we'd really like to accept). So, computation by a machine like this is non-deterministic, i.e. we can't tell exactly what the computation will be for every input. Still, can we come up with a sensible definition of what strings such a machine accepts and what strings it rejects? How about this:
A non-deterministic finite automaton accepts a string if there is a way for it to end up in an accepting state. Only if there is no way for it to end up in an accepting state does it reject its input string. Another way to view this is to say that the machine always makes lucky guesses about which transition to take when there is more than one possibility.
This is a definition that makes sense for M2, and would make sense for any non-deterministic machine. Now, to tidy up a bit we have to admit that M2 doesn't do exactly what it's supposed to, because it doesn't accept the string c, which is indeed a string that starts and ends with c and contains an even number of a's and b's. The following machine, M3, fixes this:

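The "lucky guesses" definition can be sketched as a brute-force search over all possible computations. The transition table below is my reconstruction of M2 from the computations listed above (with the two c-transitions out of q1 mapping to the set {q1, q3}); the actual diagram isn't reproduced in these notes.

```python
# delta maps (state, symbol) to a *set* of possible next states;
# an absent entry is an empty set (the "stuck" case).
delta = {
    ("q0", "c"): {"q1"},
    ("q1", "a"): {"q2"}, ("q1", "b"): {"q2"},
    ("q1", "c"): {"q1", "q3"},               # the ambiguous transitions
    ("q2", "a"): {"q1"}, ("q2", "b"): {"q1"}, ("q2", "c"): {"q2"},
}
start, accepting = "q0", {"q3"}

def accepts(q, w):
    """True if SOME computation from (q, w) ends in an accepting state."""
    if w == "":
        return q in accepting
    # Try every possible transition; a stuck branch just returns False.
    return any(accepts(p, w[1:]) for p in delta.get((q, w[0]), set()))

print(accepts(start, "cacbcc"))   # one of the three computations accepts -> True
```

The recursion explores exactly the tree of computations written out above: the string is accepted because at least one branch reaches q3 with no input left.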
Let's be lazy and sloppy ... part 3
Consider the machine M4:

This accepts the language of all strings that end with c. Now, suppose we want a machine accepting strings that either start with c and have an even number of a's and b's, or end with c. In other words, suppose I want a machine accepting the union of the languages accepted by M1 and M4. What about a machine like this?

What does this mean? Well, a λ-transition is one the machine can take without reading any input. In this case, the machine will essentially choose between running machine M1 or machine M4 before it starts reading input. Of course, the machine could simply choose to stay in its start state and start reading characters. In that case, however, it'd be stuck since there are no transitions for any of a, b or c. Here are all the possible computations on input cacc:
(q0,cacc) => (q1,cacc) => (q3,acc) => (q5,cc) => (q5,c) => (q5,λ)
(q0,cacc) => (q2,cacc) => (q2,acc) => (q2,cc) => (q2,c) => (q4,λ)
(q0,cacc) => (q2,cacc) => (q2,acc) => (q2,cc) => (q4,c) => STUCK
(q0,cacc) => (q2,cacc) => (q4,acc) => STUCK

Since one of these computations (the second one) leaves us in an accepting state, the string is accepted!

Summing up Non-Deterministic Finite Automata
Basically, a non-deterministic finite automaton is like a deterministic automaton except you can
1. have a state/symbol combination with no defined transition,
2. have a state/symbol combination with transitions to more than one state, and
3. have λ-transitions, i.e. transitions the machine can take without reading an input symbol.
Since many different computations are possible for a single input on a non-deterministic machine, and since they won't all necessarily agree on whether the input should be accepted, we need a new definition of what it means for a machine to accept a string. Our new definition is that a non-deterministic machine accepts a string as long as there is some way that it can process the string and end up in an accepting state.
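The summary definition can be sketched by simulating the machine on the *set* of states it could possibly be in, following λ-transitions for free. The tables below are my reconstruction of the union machine from the computations above (states q1/q3/q5 for the M1 copy, q2/q4 for the M4 copy), with "" standing in for λ.

```python
delta = {
    ("q0", ""):  {"q1", "q2"},               # λ-moves: choose M1 or M4
    # the M1 copy: start with c, even number of a's and b's
    ("q1", "c"): {"q3"},
    ("q3", "a"): {"q5"}, ("q3", "b"): {"q5"}, ("q3", "c"): {"q3"},
    ("q5", "a"): {"q3"}, ("q5", "b"): {"q3"}, ("q5", "c"): {"q5"},
    # the M4 copy: anything at all, ending with c
    ("q2", "a"): {"q2"}, ("q2", "b"): {"q2"},
    ("q2", "c"): {"q2", "q4"},
}
start, accepting = "q0", {"q3", "q4"}

def closure(states):
    """All states reachable from `states` by zero or more λ-transitions."""
    todo, seen = list(states), set(states)
    while todo:
        q = todo.pop()
        for p in delta.get((q, ""), set()):
            if p not in seen:
                seen.add(p)
                todo.append(p)
    return seen

def accepts(w):
    current = closure({start})
    for x in w:
        step = set()
        for q in current:
            step |= delta.get((q, x), set())  # all possible next states
        current = closure(step)
    # Accept if SOME reachable state is accepting.
    return bool(current & accepting)

print(accepts("cacc"))   # accepted via the M4 branch -> True
```

Tracking the set of possible states rather than one state is exactly how the new acceptance definition works: the machine accepts as long as an accepting state survives in that set when the input runs out. (This set-of-states view is also the seed of the subset construction we'll use later to convert such machines back to DFAs.)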

Christopher W Brown