Submitted by egdaylight
My mentor, the historian Gerard Alberts, has advised me repeatedly during the past four years not to use technological concepts, like 'program', 'compiler', and 'universal Turing machine', as subjects of my sentences. Instead, I should use historical actors. For example, I should not write
During the 1950s, a universal Turing machine became widely accepted as a conceptual abstraction of a computer.
Instead, I should write
By 1955, Saul Gorn viewed a universal Turing machine as a conceptual abstraction of his computer.
If I stubbornly decide to stick to the first sentence and if I write more sentences of that kind, Alberts explained, then my exposition will, at best, capture a development of technological ideas that is detached from the people who shaped the technology in the first place. As a result, my readership, myself included, won't realize that a universal Turing machine had different meanings for different actors, nor will it become apparent that the meaning of a universal Turing machine changed over time for each individual actor. Gorn, an influential ACM member of the 1950s-1960s, viewed a universal Turing machine quite differently in 1955 than he had a year or two earlier.
Sentences of the first kind can lead to further pitfalls. They often go along with expositions in which one line of thought dominates the entire narrative. For example, if I choose as subject matter Turing's 1936 paper, or more specifically, the connection between Turing's 1936 universal machine and modern computers, then it becomes very tempting to view the history of computing in terms of those few people who grasped that connection early on. Turing, for example, already saw a connection around 1945. This observation alone, however, does not make him an influential actor in the history of computing. By following Alberts's advice, the historian loses the inclination, indeed the temptation, to paint the history of computing as a single stream flowing from Turing's 1936 paper to the stored-program computer. The historian will then fail, and rightly so, to explain every technological advancement in terms of Turing machines.
The late Michael Mahoney warned his fellow historians of computing not to fall into the trap of viewing everything through the same glasses. The computer, Mahoney said specifically, is not one thing but many different things. He continued as follows:
[T]he same holds true of computing. There is about both terms a descriptive singularity to which we fall victim when, as is now common, we prematurely unite its multiple historical sources into a single stream, treating Charles Babbage's analytical engine and George Boole's algebra of thought as if they were conceptually related by something other than twentieth-century hindsight. [4, p.25-26]
In my own words, then, the multitude of computer-building styles, programming habits, receptions of Turing's work, and so on should be put front and center by computer scientists and historians alike. Terms like “general purpose” and “Turing universal” had different meanings for different historical actors, and their usage as synonyms is only justified if it conforms with the historical context. I take Mahoney's challenge to mean that we need to handle each and every term with great care.
Unfortunately, Mahoney “never took up his own invitation”, says Thomas Haigh in the introduction to Mahoney's collected works [4, p.5,8]. Men like McCarthy, Scott, Strachey, Turing, and von Neumann appear in almost every chapter of Mahoney's collected works, but, as Haigh continued,
[W]e never really learn who these people are, how their ideas were formed and around what kind of practice, what their broader agendas were, or whether anything they said had a direct influence on the subsequent development of programming work. [4, p.8]
Mahoney did occasionally provide some insights about the aforementioned men. Coincidentally or not, the rare passages in which he did so also comply with Alberts's writing style, as the following excerpt from Mahoney's oeuvre illustrates.
Christopher Strachey had learned about the [lambda]-calculus from Roger Penrose in 1958 and had engaged Peter J. Landin to look into its application to formal semantics. [4, p.172]
Words like these inform the reader about some historical actors, their social networks, and their research objectives.
In general, however, Mahoney much preferred technological concepts to historical actors. (See my post on Mahoney.) One instructive example comes from his 2002 work, in which he stated:
The idea of programs that write programs is inherent in the concept of the universal Turing machine, set forth by Alan M. Turing in 1936. [4, p.78-79, my emphasis]
This statement, which takes an idea as its subject, is anachronistic. Turing's 1936 paper was not at all about programs that write programs. Turing's universal machine did not modify its own instructions, nor did it modify the instructions of the other machines that it simulated.
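To make that technical point concrete, here is a minimal sketch, entirely my own and written in Python rather than in Turing's notation, of what simulation without self-modification amounts to: the simulated machine's instruction table is ordinary data that the simulator reads but never rewrites.

```python
# A toy universal-machine-style simulator (hypothetical illustration, not
# Turing's 1936 formulation): the encoded machine, i.e. the transition
# table, is read-only data; only the tape contents change.

def run(table, tape, state="q0", blank="_", max_steps=100):
    """Interpret `table`, a dict mapping (state, symbol) to
    (new_state, new_symbol, move), on `tape`, a dict from cell index to symbol."""
    head = 0
    for _ in range(max_steps):
        key = (state, tape.get(head, blank))
        if key not in table:               # no applicable rule: halt
            break
        state, symbol, move = table[key]
        tape[head] = symbol                # the tape is modified ...
        head += 1 if move == "R" else -1   # ... but the table never is
    return tape

# Example: a one-rule machine that keeps writing 1s while moving right.
print(run({("q0", "_"): ("q0", "1", "R")}, {}, max_steps=5))
# -> {0: '1', 1: '1', 2: '1', 3: '1', 4: '1'}
```

Nothing in such a sketch writes new instructions; in that narrow sense, speaking of "programs that write programs" in connection with the 1936 paper is an anachronism.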
Instead of alluding to compilers, as Mahoney did, one could perhaps refer to interpreters when attempting to document the importance of Turing's 1936 paper. (After all, interpreters emulate programs, loosely speaking.) The following sentence is, technically, more accurate:
[Turing's] universal machine in particular is the first example of an interpretative program. [1, p.165]
These words come from the eminent logician Martin Davis, who has taken great strides in explaining to the layman how important logic is in computer science. The subject of Davis's sentence is Turing's universal machine, not a historical actor. From Alberts's perspective, then, it is tempting to rewrite and, in my opinion, improve the wording. My first suggestion is to write:
During the early 1960s, John McCarthy viewed a universal Turing machine as an interpretative program.
This sentence limits the scope of Turing's influence by focusing on McCarthy and the years in which he thought about interpretative programs (in the modern, or at least a more modern, sense of the word).
My second, more elaborate, suggestion is to write:
In the 1950s, when interpreters were built, leading computer programmers like McCarthy did not initially view Turing's universal machine as an interpretative program in a practical sense. Although McCarthy had by 1960 already written a paper [5] in which he had connected the universal Turing machine to his LISP programming system, he did not see the practical implication of LISP's universal function eval. It was his student Steve Russell who insisted on implementing it. After Russell had done so, an interpreter for LISP “surprisingly” and “suddenly appeared”. [6, p.191][10]
The previous fragment informs the reader about McCarthy and his research team, partially capturing how Turing's 1936 universal machine eventually led to McCarthy's interpreter for LISP. The passage does not rule out the possibility that others had already made some kind of connection between a universal Turing machine and programming technology in earlier years. But if one wants to make a general claim about Turing's legacy, then he or she should first do the hard work of finding several specific actors for whom the claim holds.
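As an aside for readers who have never encountered LISP's eval: the following toy sketch, my own Python illustration and in no way McCarthy's or Russell's actual code, conveys what a universal function does in practice. The program being run is just a nested list, handed as data to an ordinary function that interprets it.

```python
# A toy eval in the spirit of (but vastly simpler than) LISP's universal
# function: the program being interpreted is ordinary data (nested lists).

def eval_expr(expr, env):
    if isinstance(expr, str):                  # a variable: look it up
        return env[expr]
    if not isinstance(expr, list):             # a literal, e.g. a number
        return expr
    op, *args = expr
    if op == "lambda":                         # ["lambda", [params], body]
        params, body = args
        return lambda *vals: eval_expr(body, {**env, **dict(zip(params, vals))})
    fn = eval_expr(op, env)                    # otherwise: function application
    return fn(*[eval_expr(arg, env) for arg in args])

env = {"+": lambda a, b: a + b}
program = [["lambda", ["x"], ["+", "x", 1]], 41]   # ((lambda (x) (+ x 1)) 41)
print(eval_expr(program, env))                     # -> 42
```

Russell's insight, roughly speaking, was that such a universal function could simply be hand-coded on a real machine, at which point an interpreter for LISP falls out.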
Taken at face value, Mahoney's oeuvre gives the false impression that several influential historical actors, along with Mahoney himself, thoroughly understood Turing's 1936 paper and the logic literature in general. “Computer science”, Mahoney postulated in 1997, “had come around full circle to the mathematical logic in which it had originated” [4, p.144]. But this claim does not mix well with his own appeal: to avoid falling into the trap of viewing everything through the same, logical, glasses over and over again. Nor does it mix well with several primary sources and oral histories. For example, according to the late John Reynolds, an expert in Separation Logic, understanding the logic literature is “taxing”.
[Reynolds:] I’d better admit that I haven’t read Turing’s 1936 paper. I probably avoid old papers less than most computer scientists, but I wasn’t trained in logic and thus find the subject taxing to read (indeed, more to read than to write). [9]
Men of the stature of Hoare and Naur had difficulty studying Turing's 1936 paper [3]. And Davis's remark at the end of Christos Papadimitriou's talk at Princeton clearly shows that prominent computer scientists today do not fully comprehend Turing's 1936 paper either [7].
In his monumental book A Science of Operations [8], Mark Priestley documented the interaction between theory and practice in the development of computing, starting from the early work of Babbage and ending with the programming language Smalltalk, thereby providing coverage that has yet to be matched by fellow historians. Based on his work and on some of my own research, I now list six influential publications of the 1950s-1960s. Each publication played an important role in transferring ideas from logic to computing.
- 1950: Turing's Computing machinery and intelligence
- 1950: Rosenbloom's Elements of Mathematical Logic
- 1952: Kleene's Introduction to Metamathematics
- 1954: Markov's Theory of Algorithms
- 1958: Davis's Computability and Unsolvability
- ...
- 1967: Minsky's Computation: Finite and Infinite Machines
The influence exerted by most of the publications listed above has yet to be thoroughly investigated. The story of Turing's scholarly legacy, not to mention that of Emil Post, Alonzo Church, and others, has yet to be told!
Priestley has discussed the influence of the first publication at length in 'Logic and the Invention of the Computer' [8, Ch.6]. Priestley argued that Turing had little influence in the 1940s (and in computer building in particular) but that his 1950 paper, Computing machinery and intelligence, was “a turning point in the characterization of the computer as a universal machine” [8, p.153]. “After 1950”, he wrote, “it became common to describe electronic digital computers as being instantiations of Turing's concept of a universal machine” [8, p.124]. Furthermore, he wrote:
Following 1950, [Turing's 1950] paper was widely cited, and his characterization accepted and put into circulation. [8, p.153]
These words by Priestley give the unfounded impression that most computer practitioners became acquainted with Turing's work during the 1950s. If I now name people who did not grasp, read, or even come across Turing's work, Priestley's readership will think that they are exceptions. By the late 1950s, Turing's work had indeed become increasingly popular in some niches of computing (see [2]). However, if anything can be stated at all about the majority of computer practitioners, it is that they either did not yet understand the all-purpose nature of the computer, or they did, but without resorting to (a re-cast version of) Turing's work (see e.g. [3]). Ideally, and following Alberts, neither Priestley nor I should make claims about the majority of computer practitioners in the first place. Instead of taking Turing's 1950 paper as subject, as Priestley has done in the above passage, a historical actor or a well-defined group of historical actors could be chosen, thereby limiting the scope of Priestley's claim regarding Turing's influence.
To conclude, then, I urge computer scientists and historians to respect the multitude of receptions of Turing's work. Furthermore, let us view our past, not solely as an application of Turing's 1936 paper, but also as a history of struggling to understand what Turing's 1936 paper has to offer to some, perhaps many, but definitely not all, of us.
Bibliography
[1] M. Davis. Engines of Logic: Mathematicians and the Origins of the Computer. New York, NY: W.W. Norton & Company, 1st edition, 2000.
[2] E.G. Daylight. Towards a Historical Notion of "Turing — the Father of Computer Science". History and Philosophy of Logic. To appear. I thank Tom Haigh and other reviewers for commenting on multiple drafts of this article, starting in February 2013.
[3] E.G. Daylight. The Dawn of Software Engineering: from Turing to Dijkstra. Lonely Scholar, 2012.
[4] M.S. Mahoney. Histories of Computing. Harvard University Press, 2011.
[5] J. McCarthy. Recursive functions of symbolic expressions and their computation by machine, part I. Communications of the ACM, 3(4):184-195, 1960.
[6] J. McCarthy. History of Programming Languages, chapter 'History of LISP' and the transcripts of: presentation, discussant's remark, question and answer session, pages 173-195. New York: Academic Press, 1981.
[7] C.H. Papadimitriou. The Origin of Computable Numbers: A Tale of Two Classics. May 2012. Presentation at the Turing Centennial Celebration at Princeton, 10-12 May 2012.
[8] M. Priestley. A Science of Operations: Machines, Logic and the Invention of Programming. Springer, 2011.
[9] J.C. Reynolds. Letter to Edgar Daylight on 9 March 2012. (Reynolds kindly gave me permission to publish the contents of this letter.)
[10] H. Stoyan. Early LISP history (1956-1959). In LISP and Functional Programming, pages 299-310, 1984.
1 Comment
Turing's importance and Turing's paper OCN
Submitted by JRStern
I have read and studied Turing's paper OCN, and even though I already know what it says, yes, it is very hard to read rigorously, and I too wonder if more than a handful of people ever have. Nevertheless, it was the launching point for what became modern computers, thanks to a large number of people but most especially John von Neumann. We moderns who know modern computers can see in OCN the invention, even if the exposition is confused and, of course, intermixed with certain claims about computable numbers. Untangling the almost accidental invention of computing mechanics from these other topics would be a good start.