What is a Turing machine and why do people keep mentioning it? My IBM PC is all I need to do my computation! Why does anyone care about these machines?
A Turing machine is an abstract machine capable of computation.
From Wikipedia:
Turing machines are basic abstract symbol-manipulating devices which, despite their simplicity, can be adapted to simulate the logic of any computer algorithm. They were described in 1936 by Alan Turing. Turing machines are not intended as a practical computing technology, but a thought experiment about the limits of mechanical computation. Thus they were not actually constructed. Studying their abstract properties yields many insights into computer science and complexity theory.
A Turing machine that is able to simulate any other Turing machine is called a Universal Turing machine (UTM, or simply a universal machine). A more mathematically-oriented definition with a similar "universal" nature was introduced by Alonzo Church, whose work on lambda calculus intertwined with Turing's in a formal theory of computation known as the Church-Turing thesis. The thesis states that Turing machines indeed capture the informal notion of effective method in logic and mathematics, and provide a precise definition of an algorithm or 'mechanical procedure'.
A Turing machine is an abstract machine that operates on a sequence of data and can change both its own state and that data as it operates, according to some fixed logic.
This is a concept that forms the basis of algorithms, stored programs, and computation in general. It provides good insights and abstractions if you are dealing with algorithms, states, data, etc.
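To make that concrete, here is a minimal sketch of such a machine in Python. The dict-based tape, the state names, and the little bit-flipping program are all invented purely for illustration; they are not taken from any particular formalization:

```python
# A minimal Turing machine simulator: a tape of symbols, a head position,
# a current state, and a transition table that says, for each
# (state, symbol) pair, what to write, which way to move, and what the
# next state is.

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))   # unbounded tape, modelled as a sparse dict
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example program (illustrative only): walk right, flipping 0s and 1s,
# and halt at the first blank cell.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flip, "0110"))  # -> 1001_
```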
Food for thought, for most.
The reason that Turing Machines are a big deal has to do with the study of classical Computing Science or Theory of Computation type stuff. It's basically about analyzing the general properties of a computer, such as what theoretical abilities and limitations a computer has, as well as what we mean when we talk about "computing" something.
One example of something that one might study using Turing Machines is The Halting Problem. While this problem is something of an academic exercise, it has easily tangible real-world implications. Why not write a debugger that will simply tell you whether or not your program contains any infinite loops? The Halting Problem establishes that solving this problem for the general case is impossible.
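The proof is a short self-reference argument, and it is easy to sketch in Python. Note that halts() below is purely hypothetical; the whole point of the argument is that no such function can be written:

```python
# Suppose, for contradiction, that someone hands us a perfect halting checker.
# (This function is hypothetical; the argument shows it cannot exist.)
def halts(program, argument):
    """Return True if program(argument) eventually halts, False otherwise."""
    ...

def paradox(program):
    # Do the opposite of whatever halts() predicts about running
    # the program on its own source.
    if halts(program, program):
        while True:       # loop forever
            pass
    else:
        return            # halt immediately

# Now ask: does paradox(paradox) halt?
# If halts(paradox, paradox) says True, then paradox loops forever;
# if it says False, then paradox halts immediately. Either way halts()
# was wrong, so no such checker can exist.
```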
The study of Turing Machines also lends itself to studying language grammars and classes thereof, which leads into programming language development. The term "regular expressions" comes about because they describe regular languages, the languages generated by regular grammars, and the study of these grammars (part of the Theory of Computation) will tell you exactly what kinds of problems regular expressions can solve and what they can't. For example, a traditional regular expression syntax cannot solve the following problem: match some number N of 'a' characters in the input, followed by exactly the same number N of 'b' characters.
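To see the limitation, here is a short Python sketch (the function names are just illustrative). A classical regular expression can check the shape of such a string, but counting the a's and demanding exactly as many b's requires a counter, which is precisely what regular grammars lack:

```python
import re

# A classical regular expression can check the *shape* of the string
# (some a's followed by some b's) but cannot insist the counts match:
shape_only = re.compile(r"^a*b*$")

# Recognizing a^N b^N needs a counter (equivalently, a stack),
# which a regular grammar does not have.
def is_anbn(s):
    n = 0
    i = 0
    while i < len(s) and s[i] == "a":   # count the leading a's
        n += 1
        i += 1
    return s[i:] == "b" * n             # demand exactly that many b's

print(bool(shape_only.match("aaabb")), is_anbn("aaabb"))  # True  False
print(bool(shape_only.match("aabb")),  is_anbn("aabb"))   # True  True
```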
If you're interested in a good text about this sort of thing, check out Introduction to the Theory of Computation by Michael Sipser. It's good.
Why would people who design airplanes care about the Wright Brothers, or the science behind "lift" that lets fixed wing aircraft fly?
Alan Turing is lauded as the father of modern computing. The Turing Machine is the precursor to all modern computers.
The Theory of Computability was my hardest class in college, but I'm glad I took it. It made me think about things I never would have, or think about things in ways I never would have, and those are good things.
A Turing machine is a theoretical machine that can be used to reason about the limits of computers. Simply put, it is an imaginary computer with infinite memory.
We care about Turing machines because they help us discover what is impossible to accomplish with real computers (like your IBM PC). If it is impossible for a Turing machine to perform a particular computation (like deciding the Halting Problem), then it stands to reason that it is impossible for your IBM PC to perform that same computation.
There are actually examples of Turing Machines in nature. Specifically, the ribosome, which translates RNA into proteins, implements a Turing Machine.
First, some background:
- RNA is composed of a string of nucleotides ("bases") which define the letters of the genetic alphabet.
- There are 4 bases in the RNA alphabet - A, C, G, U.
- Bases are directional: by convention the ends are called five-prime and three-prime (5', 3')
- A base in an RNA string can attract a base on another RNA string in "complementary pairs", where A sticks to U and C sticks to G.
- The bases are combined in groups of 3 to form "codons" (words).
- There are 64 possible combinations for the codons (4^3).
- Each codon can match an "anti-codon"; for instance, AUG <-> UAC.
- There are special carrier molecules ("tRNA") which have particular anticodons and are attached to specific amino acids (the building blocks of proteins).
The operation of the ribosome is simple:
- translation initiates at a "start codon", which defines the "reading frame"
- translation always proceeds in the 5'->3' direction
- the codon under the reading frame is matched with a specific tRNA containing a specific amino acid
- the start codon always encodes the amino acid Methionine.
- the new amino acid is attached to the growing protein
- the frame then advances 3 bases to the next codon, and the protein is continuously extended
- upon encountering a "stop" codon, translation is terminated, no amino acid is attached and the ribosome dissociates from the mRNA.
As you can see, this is a very simple Turing Machine performing one of the most complex operations in nature!
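For fun, here is a toy sketch of that read-translate-advance loop in Python. The codon table is truncated to a handful of entries chosen just for illustration (the real genetic code has 64 codons), but the loop itself (find a start codon, advance three bases at a time, stop at a stop codon) follows the steps above:

```python
# Toy model of the ribosome's read/translate/advance loop described above.
CODON_TABLE = {
    "AUG": "Met",   # start codon, encodes Methionine
    "UUU": "Phe",
    "GGC": "Gly",
    "GAA": "Glu",
    "UAA": None,    # stop codon
    "UAG": None,    # stop codon
    "UGA": None,    # stop codon
}

def translate(mrna):
    start = mrna.find("AUG")           # initiation at the start codon
    if start == -1:
        return []
    protein = []
    for i in range(start, len(mrna) - 2, 3):      # advance 3 bases per step (5'->3')
        amino_acid = CODON_TABLE.get(mrna[i:i + 3])
        if amino_acid is None:                    # stop codon (or unknown codon in
            break                                 # this toy model): terminate
        protein.append(amino_acid)                # extend the growing protein
    return protein

print(translate("GGAUGUUUGGCGAAUAGAA"))  # -> ['Met', 'Phe', 'Gly', 'Glu']
```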
My IBM PC is all I need to do my computation!
Something that others haven't pointed out: Your IBM PC is a Turing machine. More precisely, it is equivalent to one, in the sense that anything your PC can do, a Turing machine can do, and anything a Turing machine can do, your PC can.
Specifically, a Turing machine is a model of computation that completely captures the notion of computability, while remaining simple to reason about, without all the specific details of your PC's architecture.
The (generally accepted) "Church-Turing thesis" asserts that every device or model of computation is no more powerful than a Turing machine. So many theoretical problems (e.g. classes like P and NP, the notion of "polynomial-time algorithm", and so on) are formally stated in terms of a Turing machine, although of course they can be adapted to other models as well. (For example, sometimes it can be convenient to think of computation in terms of the lambda calculus, or combinatory logic, or whatever... they are all equivalent in power to each other, and to your IBM PC as well.)
So there you go: people talk about Turing machines because they are a precise and fully specified way to say what a "computer" is, without having to describe every detail of the CPU's architecture, its constraints, and so on.
In addition to the Wikipedia entry, you might want to pick up the book The Annotated Turing by Charles Petzold. Subtitled "A Guided Tour through Alan Turing's Historic Paper on Computability and the Turing Machine", it includes the complete paper, broken into chunks with lots of discourse on the topic, including a historical perspective.