This might help you in reviewing the HW. It shows that if you follow the process from the proof-of-correctness for the concatenation algorithm, you really do produce strings $u$ and $v$ such that $w = uv$ along with accepting computations for $M_1$ on $u$ and $M_2$ on $v$. Does that help you understand why that proof really should be convincing?
Suppose I wanted to construct finite automata to accept the languages $\{a^kb^k \mid 0 \le k \le 1\}$, $\{a^kb^k \mid 0 \le k \le 2\}$, and $\{a^kb^k \mid 0 \le k \le 3\}$. It turns out not to be so hard ... although a bit tedious.
[Figures: finite automata accepting $\{a^kb^k \mid 0 \le k \le 1\}$, $\{a^kb^k \mid 0 \le k \le 2\}$, and $\{a^kb^k \mid 0 \le k \le 3\}$]
Presumably you see the writing on the wall: the language of strings of the form $a^kb^k$ for $k$ up to some bound can be accepted by some finite automaton, but the number of states keeps growing as the bound grows. If you're clever you should be able to see how many states would be needed if $k$ were bounded by, for example, 100. But what about the language $\{a^kb^k \mid 0 \le k\}$? In other words, what if we didn't have any upper bound on $k$? Well, then we'd need infinitely many states, and that's precisely what we're not allowed to have with a finite automaton! And that, though it's not a proof, is the intuition behind why $\{a^kb^k \mid 0 \le k\}$ cannot be accepted by a finite automaton.
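To make the state-counting concrete, here's a sketch (in Python, my illustration — not part of the original notes) of building a DFA for $\{a^kb^k \mid 0 \le k \le n\}$. The state names `('A', i)`, `('B', j)`, and `'dead'` are my own invention. Counting them shows the machine needs $2n + 2$ states, so the size grows with the bound $n$.

```python
def make_bounded_dfa(n):
    """Build a DFA accepting {a^k b^k | 0 <= k <= n} over {'a','b'}.

    State ('A', i) means "read i a's, no b's yet"; state ('B', j) means
    "j more b's still owed"; 'dead' is a trap state.  That's (n+1) A-states,
    n B-states, and the trap: 2n + 2 states total.
    """
    DEAD = 'dead'
    delta = {}
    for i in range(n + 1):
        delta[('A', i), 'a'] = ('A', i + 1) if i < n else DEAD
        delta[('A', i), 'b'] = ('B', i - 1) if i >= 1 else DEAD
    for j in range(n):
        delta[('B', j), 'b'] = ('B', j - 1) if j >= 1 else DEAD
        delta[('B', j), 'a'] = DEAD
    delta[DEAD, 'a'] = delta[DEAD, 'b'] = DEAD
    # Accept the empty string (at ('A',0)) or a completed a^k b^k (at ('B',0)).
    return delta, ('A', 0), {('A', 0), ('B', 0)}

def accepts(dfa, w):
    """Run the DFA on string w."""
    delta, state, accepting = dfa
    for c in w:
        state = delta[state, c]
    return state in accepting
```

For instance, `accepts(make_bounded_dfa(2), "aabb")` is `True`, while `"aaabbb"` falls into the trap state as soon as a third `a` arrives.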
It turns out to be pretty easy to do this kind of thing. For example, if L is a language, let's let double(L) denote the language of doubles of strings in L, i.e. double(L) = {ww ∈ Σ* | w ∈ L}. Then it turns out that double(L) cannot necessarily be accepted by a finite automaton, even if L can. For example, if L = {w ∈ {a,b}* | |w| is even}, then double(L) cannot be accepted by any finite automaton. Same problem as before: a machine would need infinitely many states to "remember" the first half of the string to make sure the second half matched, for strings of any size. How about strings over the alphabet {a,b,:} such that the strings of a's and b's between :'s are in sorted order? Also not acceptable with a finite automaton!
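A quick sketch of why double(L) strains finite memory: any membership test has to hold on to the entire first half of the input to compare it against the second half. This little Python checker (my illustration, not part of the notes) makes that storage explicit.

```python
def in_double(w, in_L):
    """Decide whether w is in double(L) = {xx | x in L}.

    Note that we must keep the whole first half of w around to compare it
    with the second half -- unbounded "memory" that no fixed, finite set
    of states can provide.
    """
    if len(w) % 2 != 0:          # every string xx has even length
        return False
    half = len(w) // 2
    return w[:half] == w[half:] and in_L(w[:half])

# L = strings over {a,b} of even length, as in the example above
even_length = lambda w: len(w) % 2 == 0
```

Here `in_double("abab", even_length)` succeeds (it is "ab" doubled, and |ab| is even), while `"abba"` fails because its two halves differ.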
Anyway, the moral of the story is that when we prove that, for instance, the Kleene star of a language acceptable by an NDFA is also acceptable by an NDFA, we should be surprised and impressed!
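As one example of such a closure construction, here's a sketch (my Python encoding — an NFA is a tuple `(states, delta, start, accepting)` with epsilon moves written as `None`-labeled triples; your textbook's version may differ in details) of the standard Kleene-star construction: add a fresh accepting start state with an epsilon move into the old start, plus epsilon moves from each old accepting state back to the old start.

```python
def star(nfa):
    """Build an NFA for the Kleene star of the given NFA's language."""
    states, delta, start, accepting = nfa
    s0 = 'new_start'             # assumed not already a state name
    new_delta = set(delta) | {(s0, None, start)}          # epsilon into old start
    new_delta |= {(q, None, start) for q in accepting}    # loop back for repetition
    return (states | {s0}, new_delta, s0, accepting | {s0})

def nfa_accepts(nfa, w):
    """Simulate the NFA on w, tracking the set of reachable states."""
    states, delta, start, accepting = nfa

    def eps_close(S):
        stack, seen = list(S), set(S)
        while stack:
            q = stack.pop()
            for (p, a, r) in delta:
                if p == q and a is None and r not in seen:
                    seen.add(r)
                    stack.append(r)
        return seen

    cur = eps_close({start})
    for c in w:
        cur = eps_close({r for (p, a, r) in delta if p in cur and a == c})
    return bool(cur & accepting)
```

For example, starting from a tiny NFA accepting just `"ab"` — states `{0,1,2}`, moves `{(0,'a',1), (1,'b',2)}` — the starred machine accepts `""`, `"ab"`, `"abab"`, and so on.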
Proposed theorem: If a language L is accepted by a nondeterministic finite automaton, then L is accepted by a deterministic finite automaton.
Proof requires an algorithm with specification:
Input: Nondeterministic Machine M = (Q,Σ,Δ,s,W)
Output: Deterministic Machine M' such that L(M') = L(M)
Notice, I haven't given the algorithm here, only the specification of the algorithm. If I can provide an algorithm that meets this specification, I've proved the theorem. Hopefully it's clear to you why such an algorithm would be a proof (note how phrases from the theorem statement are echoed below):
If a language L "is accepted by a nondeterministic finite automaton," then there is an NDFA that accepts it, which we'll call M. We run our algorithm with M as input and get back a DFA M' that accepts L(M) = L. Thus, L "is accepted by a deterministic finite automaton."
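For reference, here's a sketch of what such an algorithm could look like: the usual subset construction, in Python, for NFAs without epsilon moves (the tuple encoding of M = (Q,Σ,Δ,s,W) and the dict-based Δ are my choices, not the notes'). Each DFA state is a set of NFA states — "all the states the NFA could be in right now."

```python
def subset_construction(nfa):
    """Convert an NFA (Q, Sigma, delta, s, W) to an equivalent DFA.

    delta maps (state, symbol) -> set of states.  Each DFA state is a
    frozenset of NFA states; we only build the subsets actually reachable
    from the start.
    """
    states, alphabet, delta, start, accepting = nfa
    d_start = frozenset({start})
    d_delta, seen, todo = {}, {d_start}, [d_start]
    while todo:
        S = todo.pop()
        for a in alphabet:
            # Everywhere the NFA could go from any state in S on symbol a.
            T = frozenset(q2 for q in S for q2 in delta.get((q, a), ()))
            d_delta[S, a] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    # A subset is accepting iff it contains some NFA accepting state.
    d_accepting = {S for S in seen if S & accepting}
    return seen, d_delta, d_start, d_accepting

def dfa_accepts(dfa, w):
    """Run the constructed DFA on string w."""
    _, d_delta, state, d_accepting = dfa
    for c in w:
        state = d_delta[state, c]
    return state in d_accepting
```

Trying it on a small NFA for "strings over {a,b} ending in ab" — states `{0,1,2}`, Δ = `{(0,'a'): {0,1}, (0,'b'): {0}, (1,'b'): {2}}` — the resulting DFA agrees with the NFA on every input, which is exactly the specification L(M') = L(M).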
Proof requires an algorithm with specification: ?
Note: In Problem 2 from the last homework you faced the same kind of problem as in the above activity. You were supposed to prove that if a language $L$ is accepted by an NDFA, then "chop of $L$" is accepted by an NDFA. In going over this, we noted that for any specific language (e.g. $L =\{aab,bcaa\}$) we could prove that "chop of $L$" is accepted by an NDFA by actually drawing a machine that accepts "chop of $L$". Of course, if we move on to a different language $L$, we have to come up with a different NDFA ... the machine we need for "chop of $L$" is tailored to $L$. So, to prove that for every language $L$ accepted by an NDFA, "chop of $L$" is accepted by an NDFA, we have to automate the process of producing the machine for "chop of $L$". This way we know that for any specific language $L$ anyone could ever come up with, the NDFA for "chop of $L$" exists - because we could build it by following the algorithm.