Reading
Section 1.4 of Theory of Computing, A Gentle Introduction. It's important that you read this as a way to brush up on the mathematical foundations we'll use next lecture.
Homework
Print out the homework and answer the questions on that paper.

Something to keep in mind ...
Remember, a finite automaton is a very limited computing device. After all, a given machine has a fixed, finite amount of memory --- even though that machine can be fed strings that are arbitrarily long. So we have to expect that there are lots and lots of languages out there that no finite automaton accepts. In fact, the languages that are accepted by finite automata comprise a small subset of all possible languages. But it's nice when a language can be accepted by a finite automaton, because then we have such a simple way of recognizing strings in the language.

Manipulating machines I
Consider the following two machines, one from the homework due today and one from the lab last class, both over the alphabet {a,b,c}:
Machine M1 Machine M2
Now using these machines we're going to ask a few questions. Let L(M1) denote the language accepted by Machine M1, and let L(M2) denote the language accepted by Machine M2.

  1. The complement of a language L over an alphabet Σ is the set of all strings over the alphabet that are not in L. Can we construct a machine that accepts the complement of L(M1)? Can we construct a machine that accepts the complement of L(M2)? Sure! Here they are:
    Machine M3 accepting the complement of L(M1)   Machine M4 accepting the complement of L(M2)
    After looking at how we did this, we might be inspired to make the following hypothesis.
    Hypothesis: If a language L is accepted by some finite automaton, there exists a finite automaton that accepts the complement of L.
    Proof: Consider the following algorithm:
    Input: Machine M
    Output: Machine M' produced by making all the accepting states of M non-accepting and making all the non-accepting states accepting.
    I claim that it's obvious that the language accepted by M' is the complement of the language accepted by M, i.e. I claim that L(M') is the complement of L(M). So, going back to our hypothesis, give the name M to the machine accepting L. From input M, the above algorithm produces a machine M' that accepts the complement of L, which proves the hypothesis.

    This isn't quite a proof that a pedantic mathematician would accept, because we haven't proved our claim that the machine M' produced by our algorithm accepts the complement of L(M). On the other hand, while the hypothesis is not what most people would consider self-evident, the correctness of the algorithm is. So for most people, the algorithm is enough of a proof. If you don't find it convincing, here's a proof that the algorithm does what I say it does:

    Proof that the algorithm produces a machine M' that accepts the complement of L(M).
    Since the states, start state and transitions of M and M' are the same, after processing a given string w both machines will end up in the same state, let's call it "qi". If qi is an accepting state in M it is non-accepting in M', and if qi is a non-accepting state in M it is accepting in M'. Thus, the two machines make opposite decisions for any input string w, which proves that L(M') is the complement of L(M).

    There are a few important points here. First of all, notice that how much proof is required depends on what is "self-evident", and that might change from one person to the next. In the end, proof is all about convincing someone. Secondly, the process that we just went through is what most of this course is going to be about: notice a pattern, make a hypothesis, prove the hypothesis. In fact, most proofs will look like the above: the proof is an algorithm and --- if you're picky/precise enough, or if the algorithm is complicated or subtle enough --- a proof of the algorithm's correctness.
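    To make the complementing algorithm concrete, here's a small sketch in Python. The dictionary encoding of a DFA and the example machine are my own illustration (it is not M1 or M2 from class); the example machine accepts strings over {a, b} with an even number of a's.

    ```python
    # A DFA encoded as a dictionary. This example machine is hypothetical:
    # it accepts strings over {a, b} containing an even number of a's.
    M = {
        "states": {"q0", "q1"},
        "delta": {("q0", "a"): "q1", ("q0", "b"): "q0",
                  ("q1", "a"): "q0", ("q1", "b"): "q1"},
        "start": "q0",
        "accept": {"q0"},
    }

    def complement(m):
        # The algorithm from the proof: same states, start state, and
        # transitions, but accepting and non-accepting states are swapped.
        return {**m, "accept": m["states"] - m["accept"]}

    def accepts(m, w):
        # Run machine m on string w; accept iff we end in an accepting state.
        state = m["start"]
        for ch in w:
            state = m["delta"][(state, ch)]
        return state in m["accept"]

    M_prime = complement(M)
    # As the proof says, M and M' make opposite decisions on every string:
    print(accepts(M, "aba"), accepts(M_prime, "aba"))  # True False
    print(accepts(M, "ab"), accepts(M_prime, "ab"))    # False True
    ```

    Notice that `complement` doesn't touch the transitions at all, which is exactly why the correctness argument above is so short.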

  2. The intersection of two languages L1 and L2 is the language of all strings that are in both L1 and L2. So, for example, can we construct a machine that accepts the intersection of L(M3) and L(M2)? Sure! However, this takes a little more work and a little more thought. You see, intuitively what we need to do is run both M3 and M2 simultaneously. When the input is done, we'd require both machines to be in accepting states. However, we are asking for a single machine that does the same job. The answer is to have one machine in which each state stands for a pair of states, one from M3 and one from M2. Machine M5 below does exactly this. Notice that each state is labeled with a pair of states. The state labeled q2p0 corresponds to M3 being in state q2 and M2 being in state p0. Notice that what we really have here is two copies of machine M3, one corresponding to state p0 of M2 and one corresponding to state p1 of M2.
    Machine M5 accepting the intersection of L(M3) and L(M2) Machine M5 rearranged to look cleaner!
    For example, the transition from the state labeled q1p1 to q2p0 when c is read comes from the following:
    Machine:   M3  M2                   M5
               --  --                  ----
    In state:  q1  p1     ------->     q1p1
    
               |   |                     |
    Reads:     c   c                     c
               |   |                     |
               V   V                     V
    
    Moves to:  q2  p0                  q2p0
    
    Can you see how the accepting states were selected? Once again, the way we accomplished this makes us think that we could do the same for any two machines, which leads us to the following:
    Hypothesis: If languages L1 and L2 are accepted by finite automata, there exists a finite automaton that accepts the intersection of languages L1 and L2.

    We could prove this the same way we proved our hypothesis concerning complement languages, by describing an algorithm that would take two machines as input and provide as output a new machine that accepted the intersection of the languages accepted by the two input machines. However, we're going to have to do some more work before we're able to define complex algorithms like this precisely.
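    Even before we define such algorithms precisely, the "run both machines simultaneously" idea can be sketched in Python. The two example machines below are hypothetical stand-ins for M3 and M2 (A accepts strings over {a, b} with an even number of a's, B accepts strings ending in b); the construction itself is the pair-of-states idea from Machine M5.

    ```python
    from itertools import product

    # Two hypothetical DFAs over {a, b}, standing in for M3 and M2.
    A = {
        "states": {"q0", "q1"},
        "delta": {("q0", "a"): "q1", ("q0", "b"): "q0",
                  ("q1", "a"): "q0", ("q1", "b"): "q1"},
        "start": "q0",
        "accept": {"q0"},  # even number of a's
    }
    B = {
        "states": {"p0", "p1"},
        "delta": {("p0", "a"): "p0", ("p0", "b"): "p1",
                  ("p1", "a"): "p0", ("p1", "b"): "p1"},
        "start": "p0",
        "accept": {"p1"},  # string ends in b
    }

    def intersect(m1, m2):
        # Product construction: each state is a pair (s1, s2), the two
        # machines step in lockstep, and a pair accepts iff both halves do.
        states = set(product(m1["states"], m2["states"]))
        alphabet = {ch for (_, ch) in m1["delta"]}
        delta = {((s1, s2), ch): (m1["delta"][(s1, ch)], m2["delta"][(s2, ch)])
                 for (s1, s2) in states for ch in alphabet}
        return {"states": states, "delta": delta,
                "start": (m1["start"], m2["start"]),
                "accept": {(s1, s2) for (s1, s2) in states
                           if s1 in m1["accept"] and s2 in m2["accept"]}}

    def accepts(m, w):
        state = m["start"]
        for ch in w:
            state = m["delta"][(state, ch)]
        return state in m["accept"]

    AB = intersect(A, B)
    print(accepts(AB, "aab"))  # True: even number of a's AND ends in b
    print(accepts(AB, "ab"))   # False: only one a
    ```

    Just as with Machine M5, every state of the product machine is a pair, one coordinate per input machine, and the accepting states are exactly the pairs in which both coordinates are accepting.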

  3. So what about unions?


Christopher W Brown