Michelle Obama vs. the Gatekeeper

Dated: 15 October 2016

Category mistakes and incomputability claims seem to go hand in hand. I have given one modern example, in connection with programming languages and Michael Hicks, and another one directly concerning the diagonal argument. In the present blog post I shall give one more example, based on an excerpt from my rejected POPL paper entitled: `Category Mistakes in Computer Science at Large.' Specifically, I shall zoom in on the “code improvement problem,” as discussed by Albert R. Meyer and Dennis M. Ritchie in their 1967 paper `The complexity of loop programs' [33].

 

Terminology

I distinguish between two abstractions:

  • abstraction A_{I}^{pos}, which allows for the representation of arbitrarily large positive integers, and
  • abstraction A_{R}, which allows for the representation of infinite-precision real numbers.

I use the term “programming language” as an abbreviation for “computer programming language” and always write “mathematical programming language” in full. 


Extract from my Rejected POPL Paper: “Flawed Incomputability Claims”

Let us turn to what I take to be flawed incomputability claims in the literature. Meyer & Ritchie's 1967 paper, for instance, is not solely about FORTRAN programs and LOOP programs, but, more generally, about the industrial “code improvement problem.” Is it possible to automatically improve the assembly code that a compiler produces for a FORTRAN program? That is the central question in the introduction of their paper. Citing from their introduction:

[C]onsider the problem of improving assembly code. Compilers for languages like FORTRAN and MAD typically check the code of an assembled program for obvious inefficiencies — say, two “clear and add” instructions in a row — and then produce edited programs which are shorter and faster than the original. [33, p.465, my emphasis]

As this excerpt shows, the authors specifically referred to the [computer] programming languages FORTRAN and MAD [and not to mathematical programming languages]. So, clearly, a “program” here refers to a computer program, not to a mathematical object.
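To make the excerpt concrete, here is a minimal sketch of the kind of peephole check the authors allude to. The instruction mnemonics and the Python encoding are my own illustrative choices, loosely modelled on accumulator machines of that era, and are not taken from the paper: the optimizer scans straight-line assembly code and drops a load whose result is immediately overwritten by the next load.

```python
# Minimal peephole sketch (illustrative; hypothetical instruction encoding).
# "CLA" stands for a "clear and add" style accumulator load: two of them in
# a row make the first one dead code, assuming straight-line code with no
# branch targets inside the window and no side effects on the operand.

def peephole(instructions):
    """Drop a CLA whose result is immediately overwritten by another CLA."""
    out = []
    for instr in instructions:
        if out and out[-1][0] == "CLA" and instr[0] == "CLA":
            out.pop()                      # previous load is never used
        out.append(instr)
    return out

code = [("CLA", "X"), ("CLA", "Y"), ("ADD", "Z"), ("STO", "W")]
print(peephole(code))
# -> [('CLA', 'Y'), ('ADD', 'Z'), ('STO', 'W')]
```

The edited program is shorter and no slower than the original, which is exactly the industrial notion of “improvement” at play here.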

To get a mathematical grip on the industrial “code improvement problem,” Meyer & Ritchie resorted to the theory of computability (and the theory of primitive recursive functions in the rest of their paper). Continuing with their introduction:

From the theory of computability one can conclude quite properly that no code improving algorithm can work all the time. There is always a program which can be improved in ways that no particular improvement algorithm can detect, so no such algorithm can be perfect. [33, p.465, my emphasis]

Here we see Meyer & Ritchie refer to the undecidability of the halting problem. The implication is that a “program” now refers to a mathematical object and, specifically, to a “Turing machine” (see the second footnote in their paper for a confirmation). In other words, a “program” here does not refer to, say, a finite state machine, and it definitely does not refer to a program residing in a computer.

Meyer & Ritchie subsequently conveyed some common wisdom from the theory of computation. It is possible to decide the halting problem for specific subsets of Turing machines(*); that is, even though the halting problem is undecidable in general, modeling computer programs as Turing machines can still lead to practical tools. (A modern example in this context is the work of Byron Cook et al. [9], which accepts both A_{I}^{pos} and A_{R}.) In their own words, regarding their code improvement problem:

But the non-existence of a perfect algorithm is not much of an obstacle in the practical problem of finding an algorithm to improve large classes of common programs. [33, p.465, my emphasis]

Here the word “program” thus still refers to a “Turing machine.”

My analysis so far shows that Meyer & Ritchie conflated their computer programs and their mathematical programs. Furthermore, Meyer & Ritchie only considered models of computation that comply with fundamental abstraction A_{I}^{pos}, namely that any variable V can represent an arbitrarily large positive integer. Specifically, they only considered a mathematical program as synonymous with either a Turing-machine program (i.e., a partially computable function) or a LOOP program (i.e., a primitive recursive function). But, as we have seen, there are also models of computation that do not comply with abstraction A_{I}^{pos}.
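For readers who have not encountered LOOP programs before, the following is a minimal interpreter sketch; the concrete Python encoding is my own and is not taken from Meyer & Ritchie's paper. The body of a loop is executed exactly as many times as the loop variable holds on entry, so every LOOP program terminates. Halting is therefore trivially decidable for this subset of Turing-machine programs, even though LOOP programs still rely on abstraction A_{I}^{pos}.

```python
# Minimal LOOP-program interpreter (my own reconstruction of the usual
# LOOP formalism): variables hold natural numbers; the only instructions
# are clear, increment, copy, and a bounded loop.

def run(program, env=None):
    """Execute a LOOP program (a list of nested tuples) and return the
    final variable environment, a dict from names to natural numbers."""
    env = dict(env or {})
    for instr in program:
        op = instr[0]
        if op == "clear":                  # ("clear", x)      x := 0
            env[instr[1]] = 0
        elif op == "inc":                  # ("inc", x)        x := x + 1
            env[instr[1]] = env.get(instr[1], 0) + 1
        elif op == "assign":               # ("assign", x, y)  x := y
            env[instr[1]] = env.get(instr[2], 0)
        elif op == "loop":                 # ("loop", x, body) repeat body
            for _ in range(env.get(instr[1], 0)):   # bound fixed on entry
                env = run(instr[2], env)
        else:
            raise ValueError(f"unknown instruction: {op}")
    return env

# Addition as a LOOP program: z := x; loop y { z := z + 1 }
add = [("assign", "z", "x"), ("loop", "y", [("inc", "z")])]
print(run(add, {"x": 3, "y": 4})["z"])     # -> 7
```

Note that the interpreter contains no unbounded `while`: termination is built into the control structure itself.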

Based on their mathematical analysis, Meyer & Ritchie ended their paper with an impossibility claim about computing practice:

If an “improvement” of the code of a program is defined as a reduction in depth of loops or number of instructions (without an increase in running time), then the proof of [the undecidability result expressed in] Theorem 6 also reveals that there can be no perfect code improvement algorithm. Thus the code improvement problem, which we noted in the introduction was undecidable for programs in general, is still undecidable for the more restricted class of Loop programs. [33, p.469, my emphasis]

A purely mathematical — and, hence, valid — interpretation of Meyer & Ritchie's findings would amount to stating that the theoretical variant of the “code improvement problem” is not only undecidable for Turing-machine programs in general, but also for the more restricted class of LOOP programs. My disagreement has to do with the conflation made between the theoretical variant of the “code improvement problem” and the actual, industrial “code improvement problem.” It is the latter which is “noted in the introduction” of their paper in connection with FORTRAN and MAD, not the former.

As a result — and now a specific technical contribution of my analysis follows — it remains quite possible that somebody does end up defining the improvement of the code of a FORTRAN program as a reduction in depth of loops or number of instructions, without an increase in running time, and nevertheless obtains a perfect code improvement algorithm for these computer programs. (The adjective “perfect” can, however, only be made precise in a well-defined mathematical setting.) Meyer & Ritchie's paper only indicates that the person who aspires to do so will have to resort to a model of computation that does not comply with abstraction A_{I}^{pos}.
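To see that this is more than a logical loophole, consider the following sketch of a “perfect” improver for a toy model of computation that does not comply with A_{I}^{pos}: a single register holding a value modulo 4 and three straight-line instructions. The model and the Python encoding are mine and deliberately simplistic. The point is only that, once the state space and the set of shorter candidate programs are finite, a shortest equivalent program can be found by exhaustive search, so no undecidability result stands in the way.

```python
# A "perfect" improver for a toy bounded model (illustrative sketch).
# Because every program denotes a function on the finite set {0,1,2,3},
# equivalence is checked by exhaustive testing, and a shortest equivalent
# program is found by enumerating candidates in order of length.

from itertools import product

MOD = 4                                    # the register holds 2-bit values
OPS = {
    "INC": lambda r: (r + 1) % MOD,        # increment
    "DEC": lambda r: (r - 1) % MOD,        # decrement
    "CLR": lambda r: 0,                    # clear
}

def run(program, r):
    """Execute a straight-line program (a tuple of opcodes) on input r."""
    for op in program:
        r = OPS[op](r)
    return r

def semantics(program):
    """The finite function a program computes, as a tuple of outputs."""
    return tuple(run(program, r) for r in range(MOD))

def improve(program):
    """Return a shortest program equivalent to `program`."""
    target = semantics(program)
    for length in range(len(program) + 1):             # shortest first
        for candidate in product(OPS, repeat=length):
            if semantics(candidate) == target:
                return candidate
    return program                                      # never reached

print(improve(("CLR", "CLR", "INC")))   # -> ('CLR', 'INC')
print(improve(("INC", "DEC")))          # -> ()
```

In this straight-line model a shorter program also runs no slower, so the running-time proviso is satisfied automatically. A realistic setting with fixed-width machine words and bounded memory is of course astronomically larger, so brute force is impractical there; but impractical is not the same as mathematically impossible, and that is precisely the distinction at stake.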

Footnote (*) I thank a reviewer for making the following observation: the Timed Automata model is another example that does not rely on finite abstractions and yet is complete and decidable.

 

Michelle Obama Comes to the Rescue

That, then, was another excerpt from my rejected POPL paper. Perhaps the POPL Gatekeeper agrees with my analysis — because s/he says it “is only a recapitulation of dialogues familiar to everyone working in verification and related areas” — and perhaps s/he therefore considers my specific technical contribution to be trivial. I believe my analysis is very trivial indeed, but only in retrospect. Meyer and Ritchie's main claim is similar to that of Dr. X who, briefly put, says that he has mathematically proved that certain systems cannot be engineered. And that incorrect claim, in turn, shows some resemblance to Michael Hicks's views.

I honestly don't know whether the POPL Gatekeeper is really convinced that Meyer and Ritchie have incorrectly projected their theoretical findings onto programming practice. Here's what the POPL Gatekeeper has to say in his/her own words about my rejected paper:

I believe the main points of this paper are already widely agreed-upon in the POPL community and are used appropriately throughout the literature, even in the (relatively recent) papers that are quoted as evidence to the contrary. What I expect is that the author of this paper is an outsider to the POPL community and has misunderstood its conventions for writing about formalism.

The POPL Gatekeeper has made a similar remark about Dave Parnas who had the audacity to scrutinize the writings of some honorable POPL members in the Communications of the ACM.

[Continued:] The conclusions may be interesting as a data point on how outsiders may misinterpret formal-methods papers, though it's not clear the situation here is any worse than it is for the average technical field.

Four quick points:

  1. Is the Gatekeeper suggesting that I first research whether “the situation” is worse in computer science than in other disciplines and that I then come back to the POPL gate if (and only if) the answer is affirmative? (My educated guess is that researchers in more mature disciplines make fewer category mistakes.) At least the Gatekeeper is implicitly acknowledging that “the situation” can be improved. That's a start.
  2. A researcher who does consistently distinguish between the model and the technical artefact is always preferable to someone who conflates the two. Check out the Parnas-Chaudhuri exchange and judge for yourself.
  3. I'm afraid I'm not the kind of outsider the Gatekeeper was expecting. I talk to formal methodists every single day. I even used to be a symbol chauvinist but at some point in my career I met people who were asking the right questions.
  4. I'm sure many of my findings are obvious to the Gatekeeper but I'm also certain that s/he has not grasped all implications (see this).

Continuing with the Gatekeeper's words:

I think this paper should not be accepted, because it is only a recapitulation of dialogues familiar to everyone working in verification and related areas, and yet at the same time the paper fails to "do no harm," taking quotes out of context to accuse researchers of confusion.

As Michelle Obama has said recently, “when they go low, we go high.”

[Continued:] As a convincing paper should, this one presents evidence for its thesis that the credo above is *not* already in wide use, explaining how research papers are giving misleading impressions about what their results prove. The evidence is entirely in the form of historical anecdotes and quotes from a handful of papers.

Historical anecdotes? A handful of papers? Is the Gatekeeper suggesting that I first find many more papers (which really is no difficulty at all) and that I then come back to knock on the POPL gate? What about all the other examples presented in the books of Timothy Colburn and Donald MacKenzie — books cited in my paper? Moreover, almost all articles discussed in my rejected POPL paper are authored by prominent computer scientists.

[Continued:] There are conventions for how such papers are written and what is assumed about their contexts. The author of this paper seems to misunderstand the conventions, leading to sightings of fundamental confusions where none exist.

Fortunately I belong to a second — or even a third — generation of software scholars who don't buy this rhetoric. I stand on the shoulders of James Fetzer, Peter Naur, Timothy Colburn and others who have already made complaints similar to those that I am making here. My small contribution is that I am also

  1. discussing technical claims pertaining to computability theory,
  2. following Raymond Turner's very recent publications about technical artefacts, and
  3. using primarily POPL case studies in an attempt to reach out to the POPL community.

[Continued:] At most, these new observations call for considering *educational* strategies for informing a broader audience about these conventions; researchers in the field are already well aware of them and draw the correct conclusions from papers.

So now the POPL Gatekeeper is at least admitting something. My clarifications are potentially beneficial to educators and outsiders; i.e., researchers who do not belong to the elite. I'm afraid the last sentence in the previous quote is false and, as I've explained before, I don't have any reason to believe that the POPL Gatekeeper actually understands the implications of consistently distinguishing between computer programs and mathematical programs. S/he has, remember, sidestepped my observation that multiple POPL members still tell me that it *is* possible to fully verify a digital *system*.

 

References

[9] B. Cook, A. Podelski, and A. Rybalchenko. Proving program termination. Communications of the ACM, 54(5):88–98, May 2011.
[33] A. Meyer and D. Ritchie. The complexity of loop programs. In Proceedings of the ACM National Meeting, pages 465–469, 1967.
