Section 4.2 of Theory of Computing, A Gentle Introduction.
Homework
Print out the homework and answer the questions on that paper.

A formal definition of JFLAP's TM model
A JFLAP TM is a 6-tuple M = (Q,Σ,Γ,δ,s,W), where
• Q is the set of states
• Σ is the input alphabet
• Γ is the tape alphabet (□ ∉ Γ)
• δ is the transition function
δ: (Q - W) x (Γ ∪ {□}) → Q x (Γ ∪ {□}) x {L,R,S}
• s is the start state, s ∈ Q
• W is the set of halt states, W ⊆ Q
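The 6-tuple can be written down directly as a data structure. The following is a minimal sketch; the field names, the dict-based transition table, and the BLANK marker are assumptions made for illustration, not JFLAP's internal representation.

```python
# Sketch of the 6-tuple M = (Q, Σ, Γ, δ, s, W) as a Python structure.
from dataclasses import dataclass

BLANK = "□"  # the blank symbol; note □ ∉ Γ

@dataclass
class JflapTM:
    Q: set        # states
    Sigma: set    # input alphabet Σ
    Gamma: set    # tape alphabet Γ (BLANK is not a member)
    delta: dict   # (state, symbol) -> (state, symbol, move), move in {"L","R","S"}
    s: str        # start state, s ∈ Q
    W: set        # halt states, W ⊆ Q; δ is undefined on states in W
```

Since δ is a function on (Q - W) x (Γ ∪ {□}), the dict simply has no entries keyed by a halt state.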

Of course the important feature of the model is its 2-way infinite rather than 1-way infinite tape. The tuple representing the JFLAP TM doesn't seem to say anything about that. Well, that's because the actual operation of the machine, including such details as 2-way tapes, is captured by the notions of configuration and yields in one step (i.e. ⇒), and they are different for JFLAP TMs than for our TMs.

The notion of configuration for JFLAP TMs is really the same as for our TMs. The only technical difference is that for our TMs we kept track of all characters from the first tape square up to the tape head. For a JFLAP TM that doesn't make sense, because there is no "first tape square". Therefore, we track the characters from the leftmost non-blank cell up to the tape head instead.

A configuration of a JFLAP Turing Machine M = (Q,Σ,Γ,δ,s,W) is a 4-tuple (q,u,x,v) ∈ Q x (Γ ∪ {□})* x (Γ ∪ {□}) x (Γ ∪ {□})* where
• q is the current state,
• u is the string of all characters to the left of the tape head, starting from the leftmost non-blank cell on the tape (note that u is the empty string if every cell to the left of the tape head is blank),
• x is the symbol in the cell pointed to by the tape head, and
• v is the string of all symbols to the right of the tape head up to and including the symbol in the last non-blank cell (note that v is the empty string if every cell to the right of the tape head is blank).
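The definition above can be made concrete with a small helper that reads off (q,u,x,v) from a tape. Modeling the 2-way tape as a dict from integer positions to symbols (missing positions are blank) is an assumption for illustration only.

```python
BLANK = "□"

def configuration(q, tape, head):
    """Return the configuration (q, u, x, v) per the definition above.
    tape: dict mapping int position -> symbol; head: int position."""
    def cell(i):
        return tape.get(i, BLANK)
    # u: from the leftmost non-blank cell up to (not including) the head
    left = [i for i in tape if i < head and tape[i] != BLANK]
    u = "".join(cell(i) for i in range(min(left), head)) if left else ""
    # v: from just right of the head out to the rightmost non-blank cell
    right = [i for i in tape if i > head and tape[i] != BLANK]
    v = "".join(cell(i) for i in range(head + 1, max(right) + 1)) if right else ""
    return (q, u, cell(head), v)
```

Note that blanks *between* non-blank cells are kept, which is why u runs from the leftmost non-blank cell rather than the nearest one.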

The "yields in one step" relation is where the rules of a machine are defined - and where JFLAPs 2-way infinite tape enters the picture.

For JFLAP Turing Machine M = (Q,Σ,Γ,δ,s,W), configuration (q,u,x,v) yields (q',u',x',v') in one step if, letting δ(q,x) = (p,y,d):
(q',u',x',v') =
• (p,u,y,v) if d = S
• (p,uy,v1,v2...vk) if d = R and v = v1v2...vk (Note: if u = λ and y = □, replace uy with λ)
• (p,uy,□,λ) if d = R and v = λ (Note: if u = λ and y = □, replace uy with λ)
• (p,u1...um-1,um,yv) if d = L and u = u1u2...um (Note: if v = λ and y = □, replace yv with λ)
• (p,λ,□,yv) if d = L and u = λ (Note: if v = λ and y = □, replace yv with λ)
The 2-way infinite tape is reflected by the fact that a left move is allowed even if u=λ. If you go back to the definition of yields-in-one-step for our Turing Machines, you'll notice that there is no "next configuration" defined for a left move with u=λ.
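The case analysis above transcribes almost line-for-line into code. The sketch below operates directly on (q,u,x,v) configurations, with u and v as strings; the dict-based transition table is an assumption for illustration.

```python
BLANK = "□"

def step(config, delta):
    """Yields-in-one-step for a JFLAP TM, following the cases above."""
    q, u, x, v = config
    p, y, d = delta[(q, x)]
    if d == "S":
        return (p, u, y, v)
    if d == "R":
        # if u = λ and y = □, replace uy with λ (keep u blank-trimmed on the left)
        uy = "" if (u == "" and y == BLANK) else u + y
        if v == "":
            return (p, uy, BLANK, "")
        return (p, uy, v[0], v[1:])
    # d == "L"
    # if v = λ and y = □, replace yv with λ (keep v blank-trimmed on the right)
    yv = "" if (v == "" and y == BLANK) else y + v
    if u == "":
        return (p, "", BLANK, yv)  # 2-way tape: a fresh blank appears on the left
    return (p, u[:-1], u[-1], yv)
```

The final branch is exactly the move that has no counterpart for our 1-way TMs: a left move with u = λ.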

Our TM is as powerful as JFLAP's TM: i.e. a 2-way infinite tape doesn't buy you anything
The basic difference between JFLAP's Turing Machine model and ours is the 2-way infinite tape. When you think about it, it seems like a TM with a 2-way infinite tape should be more powerful. After all, it's got more options. What we're going to do is show that, in fact, JFLAP's model is not more powerful. Given a JFLAP TM, we can construct one of our TMs that is equivalent, i.e. decides or semi-decides the same language. We'll do this by simulating a 2-way infinite tape on our 1-way infinite tape.

Imagine dividing your tape horizontally, so each cell has an upper and a lower part. This lets us simulate a 2-way infinite tape: characters in the upper part are interpreted as belonging to the right half of the simulated 2-way tape, and characters in the lower part as belonging to the left half. We'll also need to know where the left end of our 1-way tape is, so as not to fall off and crash the machine, so we'll assume the first square holds some distinct, unique symbol (like $) that doesn't appear anywhere else. The following figure shows how we simulate a 2-way tape with our 1-way tape:

Now, one problem is this: when the tape head is pointing to a cell on the tape, we have no way of knowing whether it's supposed to be referring to the upper symbol in that cell or the lower symbol. We'll have to use states to keep track of whether we're looking at the "right half" of our 2-way tape or the "left half". So each state will have a +1 or -1 tag, indicating that we're currently "in" the right half or left half, respectively, of the 2-way tape. This is handy not just for figuring out which symbol you're reading, but also because an L-move in the left half of the 2-way tape corresponds to an R-move on our 1-way tape, and an R-move in the left half of the 2-way tape corresponds to an L-move on our 1-way tape.

So here's an algorithm that takes a JFLAP TM and constructs one of our TMs to simulate it. We're assuming that the input is given to us with a $ prefixed to the front. Ultimately we'll simply run the prefix machine from above to do that for us before we run our simulating machine. Since we can't literally split each of our machine's cells in two, we instead add new "symbols" that are really ordered pairs of symbols from the JFLAP TM. The left symbol of the ordered pair represents the "upper" symbol, and the right represents the lower symbol.

Input: JFLAP TM M = (Q,Σ,Γ,δ,s,W)
Output: TM M' = (Q x {+1,-1} ∪ {h}, Σ ∪ {$}, Γ ∪ {$} ∪ (Γ ∪ {□})², δ', (s,+1), {h}), where δ'((q,d),x) =
• ((q,-d),$,R) if x = $
• ((q,d),(x,□),S) if x ∈ Γ ∪ {□}
• ((q',d),(x'1,x2),M) if d = +1, where x = (x1,x2) and δ(q,x1) = (q',x'1,M)
• ((q',d),(x1,x'2),M') if d = -1, where x = (x1,x2), δ(q,x2) = (q',x'2,M), and M' is S if M = S, R if M = L, and L if M = R
• (h,x1,S) if q ∈ W and d = +1, where x = (x1,x2)
• (h,x2,S) if q ∈ W and d = -1, where x = (x1,x2)
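Because δ' is defined by uniform rules over the state tags and symbol pairs, building it is purely mechanical. Here is a sketch of the construction as a table transformation; the names DOLLAR and HALT, and the dict format, are assumptions for illustration.

```python
BLANK, DOLLAR, HALT = "□", "$", "h"
FLIP = {"L": "R", "R": "L", "S": "S"}  # left-half moves reverse on the 1-way tape

def build_delta_prime(delta, Q, Gamma, W):
    """Construct δ' for the 1-way TM simulating a JFLAP TM with table delta."""
    dp = {}
    symbols = sorted(Gamma | {BLANK})
    pairs = [(x1, x2) for x1 in symbols for x2 in symbols]
    for q in Q:
        for d in (+1, -1):
            # Reading the left-end marker flips which half we are in.
            dp[((q, d), DOLLAR)] = ((q, -d), DOLLAR, "R")
            # An unpaired symbol is first rewritten as the pair (x, □).
            for x in symbols:
                dp[((q, d), x)] = ((q, d), (x, BLANK), "S")
            for x in pairs:
                x1, x2 = x
                if q in W:
                    # Halt: expose the symbol from the half we are in.
                    dp[((q, d), x)] = (HALT, x1 if d == +1 else x2, "S")
                elif d == +1 and (q, x1) in delta:
                    qp, y, m = delta[(q, x1)]
                    dp[((q, d), x)] = ((qp, d), (y, x2), m)
                elif d == -1 and (q, x2) in delta:
                    qp, y, m = delta[(q, x2)]
                    dp[((q, d), x)] = ((qp, d), (x1, y), FLIP[m])
    return dp
```

Each of the six cases in the definition of δ' appears as one assignment in the loop.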
In class we'll go over this algorithm in detail, including walking it through for the following input JFLAP TM:
This machine swaps all the a's and b's in a string, using the 2-way nature of the tape to move the tape head back to the beginning of the string. This isn't interesting in terms of a language decided or semi-decided, but shows how the 1-way tape simulates the 2-way tape, which is the important point here.
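Since the figure for this machine isn't reproduced here, the transition table below is reconstructed from the configuration trace at the end of the section; the state names q0, q1, q2 come from that trace, but treat the table itself as an assumption. A tiny dict-based 2-way-tape simulator shows the swap in action.

```python
BLANK = "□"

# Reconstructed swap machine: q0 sweeps right swapping a's and b's,
# q1 sweeps left (using the 2-way tape to step past the start of the
# input), and q2 is the halt state.
delta = {
    ("q0", "a"): ("q0", "b", "R"),
    ("q0", "b"): ("q0", "a", "R"),
    ("q0", BLANK): ("q1", BLANK, "L"),
    ("q1", "a"): ("q1", "a", "L"),
    ("q1", "b"): ("q1", "b", "L"),
    ("q1", BLANK): ("q2", BLANK, "R"),
}
W = {"q2"}

def run(delta, W, s, w):
    """Run the machine on input w (head starts at cell 0 of a 2-way tape);
    return the non-blank portion of the final tape."""
    tape = {i: c for i, c in enumerate(w)}
    q, head = s, 0
    while q not in W:
        x = tape.get(head, BLANK)
        q, y, d = delta[(q, x)]
        tape[head] = y
        head += {"L": -1, "R": +1, "S": 0}[d]
    keys = [i for i in tape if tape[i] != BLANK]
    if not keys:
        return ""
    return "".join(tape.get(i, BLANK) for i in range(min(keys), max(keys) + 1))
```

Note that q1's leftward sweep ends on a cell at a negative position, which is exactly the move a 1-way tape cannot make.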

Theorem: JFLAP's TM model and our TM model have the same computational power.
Proof: The above algorithm shows that our model can be used to decide/semi-decide any language that can be decided/semi-decided by JFLAP's model. Conversely, it's easy to use JFLAP's TM model to simulate a machine of our TM model, so JFLAP's model can be used to decide/semi-decide any language that can be decided/semi-decided by our model.

Now, to really prove that the above algorithm works we need to consider the sequence of configurations that the JFLAP TM M goes through in a computation versus the sequence of configurations that TM M' goes through. What we saw in class was that M' goes through a longer sequence of configurations, but that its sequence includes configurations that map to each configuration in the sequence M goes through. Thus, M halts on input w if and only if M' halts on input w (with the $ prefixed), and M halts with a □ (or non-□) where the tape head is pointing if and only if M' halts with a □ (or non-□) where the tape head is pointing.

$\begin{array}{|r|r|}
\text{computation in M'}&\text{computation in M}\\ \hline
\left( (q_0,+1) , \$ , a , \lambda \right) \Rightarrow_{M'}&\\
\left( (q_0,+1) , \$ , (a,□) , \lambda \right) \Rightarrow_{M'}& (q_0,\lambda,a,\lambda) \Rightarrow_M\\
\left( (q_0,+1) , \$(b,□), □ , \lambda \right) \Rightarrow_{M'}&\\
\left( (q_0,+1) , \$(b,□), (□,□) , \lambda \right) \Rightarrow_{M'}& (q_0,b,□,\lambda) \Rightarrow_M\\
\left( (q_1,+1) , \$ , (b,□), (□,□) \right) \Rightarrow_{M'}& (q_1,\lambda,b,\lambda) \Rightarrow_M\\
\left( (q_1,+1) , \lambda, \$ , (b,□)(□,□) \right) \Rightarrow_{M'}&\\
\left( (q_1,-1) , \$ , (b,□), (□,□) \right) \Rightarrow_{M'}& (q_1,\lambda,□,b) \Rightarrow_M\\
\left( (q_2,-1) , \lambda, \$ , (b,□)(□,□) \right) \Rightarrow_{M'}&\\
\left( (q_2,+1) , \$ , (b,□), (□,□) \right) \Rightarrow_{M'}& (q_2,\lambda,b,\lambda)\\
\left( h , \$ , b, (□,□) \right)&
\end{array}$

Christopher W Brown