Submitted by egdaylight on
In the July 2011 issue of the Communications of the ACM, you will find on page 5 a section entitled "Solving the Unsolvable", written by the editor-in-chief, Moshe Y. Vardi. In his words:
In 1936–1937, Alonzo Church, an American logician, and Alan Turing, a British logician, proved independently that the Decision Problem for first-order logic is unsolvable; there is no algorithm that checks the validity of logical formulas. The Church-Turing Theorem can be viewed as the birth of theoretical computer science. To prove the theorem, Church and Turing introduced computational models, recursive functions, and Turing machines, respectively, and proved that the Halting Problem—checking whether a given recursive function or Turing machine yields an output on a given input—is unsolvable.
The previous passage is inaccurate in several respects. First, it is perhaps more appropriate to say that Alan Turing was a mathematician, not a logician. Second, even if Turing solved the Decision Problem independently of Church, he certainly did so after Church (more about this later). Third, it was primarily due to the work of Gödel and others (e.g. Herbrand) that the definition of a recursive (i.e. computable) function was introduced, not (solely) due to Church. And it was not Turing who introduced what we today call a "Turing machine". Turing introduced his automatic machines, which, unlike the later devised "Turing machines", contain neither an input nor an output. Turing's automatic machines were intended to compute forever, i.e. to compute a real number with ever-increasing accuracy. In later years, Kleene and Davis recast the concept of Turing's automatic machine into that of a "Turing machine". Likewise, it was Davis (not Turing) who, during the 1950s, defined the Halting Problem. So, strictly speaking, neither Church nor Turing proved the unsolvability of the Halting Problem.
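To make the contrast concrete, consider the very first example machine in Turing's 1936 paper: it has no input and no output in the modern sense, and it is never meant to halt; it prints the binary digits 0 1 0 1 ... on alternate squares, forever. The table below is Turing's; the Python encoding and simulator are merely my own illustrative sketch.

```python
# Turing's first example machine from "On Computable Numbers" (1936),
# encoded as a Python sketch. It prints 0 1 0 1 ... on alternate squares,
# forever: it "computes" the real number 0.010101... in binary.
from collections import defaultdict

# (m-configuration, scanned symbol) -> (symbol to print, head move, next m-configuration)
TABLE = {
    ('b', None): ('0', +1, 'c'),
    ('c', None): (None, +1, 'e'),   # no print, just move right
    ('e', None): ('1', +1, 'f'),
    ('f', None): (None, +1, 'b'),
}

def automatic_machine(table, start='b', steps=12):
    """Run for a bounded number of steps, only so that this demonstration
    terminates; the machine itself is intended to compute forever."""
    tape = defaultdict(lambda: None)    # blank tape
    state, pos = start, 0
    for _ in range(steps):
        symbol, move, state = table[(state, tape[pos])]
        if symbol is not None:
            tape[pos] = symbol
        pos += move
    return [tape[i] for i in range(pos)]

print(automatic_machine(TABLE))
# ['0', None, '1', None, '0', None, '1', None, '0', None, '1', None]
```

Note that there is no input, no output convention, and no halting state: the endless sequence of printed digits is the computation.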
The claims made in the previous paragraph will be elaborated on in a forthcoming publication (in spring 2012). For now, it suffices to note how easy it is to forget or be ignorant of the actual developments that did take place in the past. Here we have the editor in chief of a reputable magazine being inaccurate in several respects in just one paragraph!
"So what?" you may ask. "Isn't Vardi's account accurate enough for his readership?" After all, both Church and Turing did prove theorems which come very close to what we would today call the unsolvability of the Halting Problem. So why be so picky about Vardi's words? The quick and cliché answer is that Vardi is glorifying the work of some at the expense of the work of others: conceptualizing the "Turing machine" and the "Halting Problem" took several years and was also due to the work of Kleene and Davis, not just Turing. Vardi's passage strengthens my skepticism about the forthcoming centenary celebration of Alan M. Turing (in the year 2012); it is not hard to find several similar passages, also written by leading researchers in computing. The historical answer is that Church and Turing were, during the mid-1930s, solving a problem in mathematical logic. They were not, as we now tend to believe, thinking about programming or halting computations. They did not in 1936 anticipate that their work would be of extreme value in e.g. programming language semantics or complexity theory. By seeking heroes, like Church and Turing, we are often, albeit unintentionally, disrespecting their true accomplishments! The pending and outstanding problem of the day was the Decision Problem, it had to do with the foundations of mathematics, not with the foundations of programming or computation (cf. recursive function theory).
Recently, I have received comments that Turing may not have solved the Decision Problem independently of Church. One such comment states:
It is very common to attribute the result to Turing, or to blithely claim that Church and Turing solved the problem independently and simultaneously. This is not the case. As the papers by Church (April 1936) and Turing (November 1936) show, Church published his result several months before Turing published his, and, more importantly, Turing himself acknowledges Church's priority in the matter and compares his work to Church's. Moreover, Turing was Church's student at the time, in Princeton, and cannot be assumed to have done his work independently.
I received the above comment with some skepticism because I have read Turing's biography, written by Andrew Hodges, in which it is clearly stated that Turing was not aware of Church's work while writing his now-famous 1936 paper "On Computable Numbers ...". Hodges has, in fact, delved into Turing's personal correspondence to back up this claim. Nevertheless, as illustrated by Vardi's passage, we humans have a tendency to glorify. Therefore, in light of the 2012 centenary celebration of Turing, it seems most appropriate for somebody other than Hodges to study the matter again. To what extent did Turing rely on Church's expertise when he was with him in Princeton? How did Turing's draft versions of his 1936 paper develop? Questions like these deserve further scrutiny.
Furthermore, Vardi's passage presented above is, due to its inaccuracies, of value to the history of science itself. The passage seems to suggest that Vardi did not thoroughly read Church's and Turing's papers, even though he considers Church and Turing to be among the fathers of what we today call "theoretical computer science". As mentioned, Vardi is by no means exceptional in this respect: many (if not most) researchers in computing have not read Church's and Turing's papers. In retrospect, this is no surprise, given that their papers solve a fundamental 1936 problem in mathematical logic, not computing. These papers are extremely hard to grasp for the modern-day computing professional. This observation, in turn, raises the following questions: Did Turing Award winners, like Dijkstra, ever read Turing's 1936 paper? To what extent were they inspired by Turing's work? Answers to such questions will be presented in a forthcoming publication; a fragment concerning Dijkstra is presented below.
By August 1971, Dijkstra had written Chapter 2 of his "A Short Introduction to the Art of Programming" (see EWD316, page 15). In that chapter, Dijkstra explicitly referred to Turing's work although he did not cite Turing's 1936 paper. In Dijkstra's words:
A proper algorithm is an algorithm which halts on all inputs. An improper algorithm is an algorithm which is not proper. [...]
The question of whether a text that looks like an algorithm is indeed a proper algorithm or not, is far from trivial. As a matter of fact Alan M. Turing has proved that there cannot exist an algorithm capable of inspecting any text and establishing whether it is a proper algorithm or not.
This passage shows that it was formulated by a computer programmer, not by a logician intimately familiar with Turing's original research agenda. Again, Turing did not deal with halting computations. In 1936, Turing would have associated the word "proper" with exactly the opposite of what Dijkstra proposed: for Turing, the desirable (circle-free) machines were precisely those that keep printing digits forever. The observation just made merely serves to contrast two leading researchers, not to criticize or belittle.
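For readers who want the gist of the result Dijkstra invokes, here is a minimal sketch, in Python, of the classical diagonal argument; the checker `halts` is hypothetical, and the point is precisely that it cannot exist.

```python
# A minimal sketch of the diagonal argument behind Dijkstra's claim.
# 'halts' is a hypothetical checker, assumed (for contradiction) to
# decide whether any program halts on a given input; no such algorithm
# can actually exist.

def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical: True iff the given program halts on the given input."""
    raise NotImplementedError("assumed to exist only for the sake of argument")

def contrary(program_source: str) -> None:
    # Do the opposite of whatever 'halts' predicts for self-application.
    if halts(program_source, program_source):
        while True:      # predicted to halt, so loop forever instead
            pass
    # predicted to loop forever, so halt immediately instead

# Feeding 'contrary' its own source text contradicts the prediction of
# 'halts' either way. Hence no algorithm can establish whether an
# arbitrary text is a "proper algorithm" in Dijkstra's sense.
```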
Comments
Important one.
Submitted by Srinivas Nayak (not verified) on
Dear Sir,
This is a good finding.
Some go beyond... saying that David Hilbert was the inspiration of computing... LOL
Anyway, truly, this was a coincidence... computing was a search for a machine that could help in calculation,
... mathematics was a search for the rules that govern our calculation,
... both coincided at some point in time, and thus computing science was born.
Giving credit to somebody other than the true person is very painful.
Hope to hear from you.
Sincerely,
Srinivas
Hodges on Turing and Church
Submitted by egdaylight on
Here are some notes from Andrew Hodges's Alan Turing: The Enigma, Burnett Books, 1983. About Alan Turing and his realization of how to solve the Entscheidungsproblem:
Turing worked alone:
Turing worked from spring 1935 through the following year:
Turing became aware of Church's work:
If all of the above is accurate, then the following conclusion holds: Turing started working on the Entscheidungsproblem and his machines in spring 1935, and Church announced his result (in the USA) in spring 1935.
Hodges on Turing and Church (Part 2)
Submitted by egdaylight on
Here are some more notes from Andrew Hodges's Alan Turing: The Enigma, Burnett Books, 1983. Church's paper arrived in Cambridge:
Letter from Newman to Church, 31 May 1936:
Church refereed Turing's paper:
Letter from Newman to White, 31 May 1936:
Alan Turing reported to his mother on 29 May:
Turing eventually studied the work of Church and Kleene:
Turing arrived in New York on Tuesday 29 September 1936 and reached Princeton late that night [cf. p.116]. Soon after arriving in Princeton:
During the first months of 1937:
Preparations for conducting PhD research under the supervision of Church:
Back to Cambridge, England for part of the summer of 1937:
Comments on Church
Submitted by egdaylight on
Private correspondence with a computer scientist has led me to write the following in defense of Alonzo Church:
Alan Turing's machine is not a modern Turing Machine
Submitted by Rainer Glaschick (not verified) on
Just to support your observation, I would like to cite two phrases from page 233 of On Computable Numbers..., where Alan Turing wrote:
A sequence is said to be computable if it differs by an integer from the number computed by a circle-free machine. A circle-free machine is a machine that prints an endless number of symbols (of the first kind); otherwise, if it never writes down more than a finite number of symbols of the first kind, it is called circular.
It was necessary to allow the machine not to stop, as only rational numbers can be expressed as a finite sequence of digit symbols, and Turing's concept of computability was a machine that could create a sequence of digits representing the number. I must admit that I do not yet know how a Turing Machine, in its contemporary definition, delivers π after a stop...
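In the spirit of a circle-free machine, π can indeed be delivered as an endless stream of digits rather than as the output of a halting computation. Here is a minimal sketch in Python, using Gibbons's unbounded spigot algorithm, a modern method that is of course not found in Turing's paper:

```python
# A minimal sketch of pi as an endless digit stream, circle-free style.
# This is Gibbons's unbounded spigot algorithm (2006), not anything
# from Turing's 1936 paper.
from itertools import islice

def pi_digits():
    """Yield the decimal digits of pi, one at a time, forever."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n                      # the digit n is now certain
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

print(list(islice(pi_digits(), 10)))     # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```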
But I would say that Alan Turing's article is not really difficult to read, unless you want to follow each detail. Recently I read it before citing from it, and was very astonished to find that the machine he described is not the same as what I had learned from all the books on logic and computability known to me. There are more differences (Turing does not erase input symbols), which are also never mentioned later; at least a footnote would have been justified. It looks like a historically correct evaluation is still missing, and as far as I know, none is on the way.
More important for me is that Turing's universal machine was the first concept of a computer in which one and the same device holds the program, the input, the working area, and the output. It was, just in the spirit of EWD, a design with a small number of powerful operations.
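To make that point tangible, here is a crude stored-program sketch in the modern, von Neumann-flavoured sense (emphatically not Turing's actual 1936 construction; the instruction format is my own illustrative assumption): one and the same Python list serves as program, input, working area, and output.

```python
# A crude stored-program sketch: the program is just data, living in the
# same memory as the input, the working area, and the output.
memory = [
    ('ADD', 8, 9, 10),   # cells 0..2: the program, stored as data
    ('PRINT', 10),
    ('HALT',),
    None, None, None, None, None,   # cells 3..7: unused working area
    2, 3,                           # cells 8..9: the input
    None,                           # cell 10: working area / output
]

pc = 0                   # program counter
while True:
    instr = memory[pc]
    if instr[0] == 'ADD':
        _, a, b, dst = instr
        memory[dst] = memory[a] + memory[b]
    elif instr[0] == 'PRINT':
        print(memory[instr[1]])     # prints 5
    elif instr[0] == 'HALT':
        break
    pc += 1
```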
Forthcoming book on Turing and Dijkstra
Submitted by egdaylight on
Rainer, thanks for your comments. I would like to mention my forthcoming book: The Dawn of Software Engineering: from Turing to Dijkstra. It will appear in early 2012 and will be published by Lonely Scholar. In that book, both Hoare and Naur recollect how difficult it was for them to grasp Turing's 1936 paper. The first part of the book will contain my historical synthesis of Turing's influence on programming, and on Dijkstra's thinking in particular. Best wishes, Edgar.
Dijkstra seemed hostile toward Artificial Intelligence
Submitted by PracticalD (not verified) on
In his EWD articles, Dijkstra seemed hostile toward AI (artificial intelligence). Turing, on the other hand, seemed to be in favor of it. I think Dijkstra's view was that if the machine became too smart, it would want to escape/kill itself and/or kill humans. We use machines as our slaves and they are good workhorses. If the workhorse found out we were really just using it as a slave, it would want to kill its owner. I'm not sure if this is what Dijkstra felt, but something similar is expressed in one or more of his EWD articles. Please post the EWD article if you know which one it is.
Dijkstra was also hostile toward LISP (and Lisp has ties to AI research). Dijkstra does seem to contradict himself sometimes... he respects Alan Turing, but is against AI? He is hostile toward LISP, but then calls it an "intelligent way to misuse a computer". So Lisp is good, but also bad... a contradiction.