**Reading**-
Sections 5.1 and 5.2 of
*Theory of Computing, A Gentle Introduction*. **Homework**- Print out the homework and answer the questions on that paper.

**Church-Turing Hypothesis**-
Last class we showed that extending our Turing machine model
to allow a 2-way infinite tape (i.e. a JFLAP machine)
does not add to the model's computational power. People have
dreamed up many other extensions: multiple tapes, multiple
tape heads on each tape, random access Turing machines, memory
grids instead of tapes, and so on. But each of these
extensions can be simulated with our simple model, just as we
simulated JFLAP's 2-way infinite tape, so that the
computational power of the extended models --- i.e. the
languages that can be decided and semi-decided --- is just the
same as our simple model.
No matter how we try to extend our TM model, it never seems to increase its computational power. In fact, other completely different computational models have been proposed, and none of them are any more powerful than our basic Turing machine. This leads to the Church-Turing Hypothesis which, roughly, says this: "If a Turing machine can't compute it, nothing can!"

More precisely, the Church-Turing hypothesis says that a Turing machine can be taken as the definition of what it means to compute --- i.e. by definition what a TM computes is what's computable --- because any computational device can be simulated by a Turing machine. This is not a mathematical fact, it's a philosophical statement. It's a definition of a term (computable) which is otherwise vague. In the next few classes, we'll show that there are problems that cannot be computed by Turing machines, which means there are problems that are "uncomputable".

**Machines vs. Programs**-
Each Turing machine is a special purpose device. Every new
problem we come across requires us to construct a different
Turing machine. This is not the way our computers work.
In our intro CS course, did you build a new computer for each
new problem? Was there a "hello world" computer? No! We
have one, general purpose computer. For each new problem we
come across we create a new
*program*, not a new machine. We'd like to do the same thing with Turing machines. We'd like to have one general purpose, Universal Turing machine and use it to simulate all of those special purpose Turing machines we've been creating. To do that, however, the special purpose Turing machine must be encoded in symbols and given to the Universal machine as part of its input. So our big task is to figure out how to encode any and every Turing machine in some fixed alphabet. We might as well fix the alphabet to the simplest one we know: {0,1}.

**Encoding**-
Inside a real computer, everything eventually gets represented as a
simple string of 0's and 1's. We're going to do the same thing with
our Turing Machines. We won't be able to use ASCII encodings
though, or even Unicode ... there's no limit on the number of
different characters that could potentially appear on a Turing
machine tape. So we'll use a different encoding.
To make things clear, let's fix names for every possible character, and while we're at it, every possible state. Let
*S = {a_{0}, a_{1}, a_{2}, ...}* be the set of all possible symbols for a machine. Let
*A = {q_{0}, q_{1}, q_{2}, ...}* be the set of all possible states for Turing Machines. Thus, any machine
*M = (Q,Σ,Γ,δ,s)* will satisfy *Σ ⊂ S* and *Q ⊂ A*. We'll let

*e* be our "encoding function", which takes any object and encodes it as a string of zeros and ones. First let's look at symbols, states and tape-head moves (writing □ for the blank symbol):

| Symbols | States | Moves |
|---|---|---|
| *e(□) = 0* | *e(h) = 0* | *e(S) = 0* |
| *e(a_{i}) = 0^{i+2}* | *e(q_{i}) = 0^{i+2}* | *e(L) = 00* |
| | | *e(R) = 000* |

At this point, we have a scheme for encoding the building blocks of a machine, but the real heart of the machine is *δ*. If a transition in a TM is *δ(p,x) = (q,y,m)*, we encode it as *e(p)1e(x)1e(q)1e(y)1e(m)1*.

Now we can encode a Turing Machine. Let
*T* be a Turing Machine with initial state *q* and "transitions" (i.e. values of *δ*) *t1, t2, ..., tk*. We encode *T* as *e(T) = e(q)1e(t1)1e(t2)1...1e(tk)1*.

Since we've come so far, I should also point out that we could encode an input string *z = a_{i1}a_{i2}... a_{in}* as *e(z) = 11e(a_{i1})1e(a_{i2})1 ... e(a_{in})1*.

To summarize:

**Encoding Turing machines:**

Let {a0,a1,a2,...} be the set of all symbols (excluding the blank) that may appear on a TM tape.

Let {q0,q1,q2,...} be the set of all states (excluding *h*) that may be part of a TM.

Symbols, states and tape-head moves get encoded by the encoding function *e* as follows (writing □ for the blank symbol):

| Symbols | States | Moves |
|---|---|---|
| *e(□) = 0* | *e(h) = 0* | *e(S) = 0* |
| *e(ai) = 0^{i+2}* | *e(qi) = 0^{i+2}* | *e(L) = 00* |
| | | *e(R) = 000* |

A transition *t := δ(p,x) = (q,y,m)* gets encoded as: *e(t) = e(p)1e(x)1e(q)1e(y)1e(m)1*.

The machine *T* with start state *q* and transitions *t1, t2, ..., tk* gets encoded as: *e(T) = e(q)1e(t1)1e(t2)1...1e(tk)1*.

Input *z := a_{i1}a_{i2}... a_{in}* gets encoded as: *e(z) = 11e(a_{i1})1e(a_{i2})1 ... e(a_{in})1*.

Machine *T* and input *z* get encoded as *e(T)e(z)*.

**Example of Encoding Machines**-
Let *T* be the Turing Machine:

```
0 0 1  0 0 1 0 0 1 0 0 1 0 1 0 0 0 1 1  0 0 1 0 1 0 1 0 0 1 0 1 1
\___/  \______________________________/  \______________________/
start            transition 1                  transition 2
state
```
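The scheme can be sketched in a few lines of Python (a minimal sketch; the function names are mine, not from the notes). Symbols and states are given by index (a_i → i, q_i → i), and `None` stands for the blank symbol or the halt state *h*, depending on context. The two transitions below are exactly the ones that can be read off the labeled encoding above: δ(q0,a0) = (q0,□,R) and δ(q0,□) = (h,a0,S).

```python
def e_symbol(a):
    return "0" if a is None else "0" * (a + 2)   # blank -> 0, a_i -> 0^(i+2)

def e_state(q):
    return "0" if q is None else "0" * (q + 2)   # h -> 0, q_i -> 0^(i+2)

MOVES = {"S": "0", "L": "00", "R": "000"}

def e_transition(p, x, q, y, m):
    # t := delta(p,x) = (q,y,m)  ->  e(p)1 e(x)1 e(q)1 e(y)1 e(m)1
    return "1".join([e_state(p), e_symbol(x), e_state(q),
                     e_symbol(y), MOVES[m]]) + "1"

def e_machine(start, transitions):
    # e(T) = e(q)1 e(t1)1 e(t2)1 ... e(tk)1
    return e_state(start) + "1" + "".join(e_transition(*t) + "1"
                                          for t in transitions)

def e_input(symbols):
    # e(z) = 11 e(a_i1)1 ... e(a_in)1
    return "11" + "".join(e_symbol(a) + "1" for a in symbols)

# The example machine: start state q0, with
#   delta(q0, a0) = (q0, blank, R)  and  delta(q0, blank) = (h, a0, S)
print(e_machine(0, [(0, 0, 0, None, "R"), (0, None, None, 0, "S")]))
# -> 00100100100101000110010101001011
```

Running it reproduces the bit string in the example, and `e_input([0, 0, 0])` gives `11001001001`, the encoding of a0a0a0.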

**Encoding Input**-
We'll want to encode both the special purpose machine and the
input string it's supposed to compute with. Our convention for
encoding the input string will be as follows:
Input string *w = a_{i1}a_{i2}... a_{ik}* gets encoded as *e(w) = 11e(a_{i1})1e(a_{i2})1 ... e(a_{ik})1*. This input string can then simply be appended to the encoded machine.

**A Complete Example**-
Give the encoding of string
*w = a_{0}a_{0}a_{0}* and machine *M*:

```
0010010101010110010010001001000110001010001010001100010010010010001111001001001
\_________/ \_______________/ \______________/ \_______________/ \_________/
     t1            t2                t3               t4            input
```
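Going the other direction, a decoder can split an encoded machine+input string back apart (a sketch of my own, not part of the notes). It returns raw run-lengths of 0's: length 1 means the blank, *h*, or move S; length i+2 means a_i or q_i; moves decode as 1 = S, 2 = L, 3 = R. The key observation is that every encoded transition starts with at least two 0's, so the first "11" seen between transitions must be the input marker.

```python
def decode(s):
    """Split e(T)e(z) into (start state, transitions, input symbols)."""
    i = 0

    def run():
        # read a maximal block of 0's followed by a single 1; return its length
        nonlocal i
        start = i
        while s[i] == "0":
            i += 1
        length = i - start
        assert s[i] == "1", "expected a 1 after a block of 0's"
        i += 1
        return length

    start_state = run()                     # e(q)1
    transitions = []
    while not s.startswith("11", i):        # "11" here can only be the input marker
        t = tuple(run() for _ in range(5))  # p, x, q, y, m
        assert s[i] == "1"                  # the extra 1 after each transition
        i += 1
        transitions.append(t)
    i += 2                                  # skip the "11" input marker
    tape = []
    while i < len(s):
        tape.append(run())
    return start_state, transitions, tape
```

Applied to the two-transition example machine with input a0a0a0 appended, `decode` returns start state run-length 2 (i.e. q0), transitions `(2,2,2,1,3)` and `(2,1,1,2,1)` --- that is, δ(q0,a0) = (q0,□,R) and δ(q0,□) = (h,a0,S) --- and input `[2,2,2]`, i.e. a0a0a0.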

**Recognizing an encoded TM+input**-
When we get around to creating a "Universal" Turing machine
that can read an encoded TM+input and simulate that machine on
that input, we'll have to be able to at least decide whether
the input is a proper encoding according to the above rules.
This is actually easier than you might think. Why? Because
the language of all proper encodings is regular! Here's a
regular expression for it:
`00+1(00+10+10+10+1(0|00|000)11)*11(00+1)*`
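That expression can be checked directly with Python's `re` module (the function name below is mine). The first group matches the start state, the starred group matches one encoded transition per iteration, and the tail matches the `11` input marker plus the input symbols.

```python
import re

# The regular expression from above, in Python's re syntax.
ENCODING = re.compile(r"00+1(00+10+10+10+1(0|00|000)11)*11(00+1)*")

def is_proper_encoding(s: str) -> bool:
    # fullmatch: the entire string must be machine encoding + input encoding
    return ENCODING.fullmatch(s) is not None
```

For example, it accepts the encoding of the earlier two-transition machine with input a0a0a0 appended, and rejects strings like `0101` that don't start with a valid state encoding.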

Christopher W Brown Last modified: Sun Dec 6 22:51:13 EST 2009