Submitted by egdaylight
In 1962 Christopher Strachey reasoned about ALGOL60 programs in terms of machine efficiency and, hence, to a non-negligible extent he worked from the machine to the language. Dijkstra, by contrast, insisted on reasoning from ALGOL60 to the machine. In Strachey's words:
I think the question of simplifying or reducing a language in order to make the object program more efficient is extremely important. I disagree fundamentally with Dijkstra, about the necessity of having everything as general as possible in all possible occasions as I think that this is a purely theoretical approach […] [1, p.368] [2, Ch.3]
In 1973, Strachey insisted on abstracting away from the finiteness of real computers when reasoning about ALGOL60 programs. First ensure program correctness in an exact mathematical setting, he said, before descending to the level of bit patterns inside the finite machine. A high-level programming language such as ALGOL60
allows us to think about mathematical objects, such as integers, instead of their rather arbitrary representation by bit-patterns inside the machine. [3, p.2]
The luxury of a high-level language, Strachey continued, was that it allowed the programmer to reason about variables instead of locations (or addresses), and about functions instead of subroutines. When an ALGOL60 programmer writes a statement such as x := Sin(y+3), he has in mind the mathematical functions sine and addition.
For Strachey, the finiteness of the machine only entered the picture afterwards.
It is true that our machines can only provide an approximation to these functions but the discrepancies are generally small and we usually start by ignoring them. It is only after we have devised a program which would be correct if the functions used were the exact mathematical ones that we can start investigating the errors caused by the finite nature of our computer. [3, p.2, original italics]
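Strachey's two-stage discipline can be illustrated with a small modern sketch (my example, not his): first reason in exact arithmetic, then measure the discrepancy introduced by the machine's finite representation.

```python
from fractions import Fraction

# Stage 1: exact mathematical reasoning. In the rationals,
# 1/10 + 2/10 equals 3/10, full stop.
exact = Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)

# Stage 2: the machine's binary floating-point approximation
# of the same computation disagrees with the exact result.
approx = (0.1 + 0.2 == 0.3)

# The discrepancy is, as Strachey notes, generally small.
error = abs((0.1 + 0.2) - 0.3)

print(exact, approx, error < 1e-15)
```

The program devised in stage 1 is correct with respect to the exact functions; only afterwards does one investigate the (tiny) errors caused by the finite machine.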
In his paper, which drew in large part on his work with Dana Scott, Strachey assigned “the usual mathematical” properties to his domains: Booleans T, Integers N, Reals R, etc. Likewise, he defined stored values V as T + N + R and, hence, allowed them to be arbitrarily large. For example, the number π, with its infinite decimal expansion, belongs to R. He then formalized ALGOL60 in terms of these arbitrarily large domains — for example, he defined vectors as L* = L + L² + L³ + … where L denotes the domain of locations — and, hence, appropriated ALGOL60 in this infinite setting [3, p.5,16,17]. Contrast this appropriation with Peter Naur's original intention for ALGOL60 [4] and van Wijngaarden's formalization [5], in which the arithmetic of real numbers in ALGOL60 was explicitly defined for a finite setting:
Arithmetic of real quantities. Numbers and variables of type real must be interpreted in the sense of numerical analysis, i.e. as entities defined inherently with only a finite accuracy. Similarly, the possibility of the occurrence of a finite deviation from the mathematically defined result in any arithmetic expression is explicitly understood. No exact arithmetic will be specified, however, and it is indeed understood that different hardware representations may evaluate arithmetic expressions differently. The control of the possible consequences of such differences must be carried out by the methods of numerical analysis. This control must be considered a part of the process to be described, and will therefore be expressed in terms of the language itself.
Strachey subsequently used his formalism to compare and contrast a variety of programming languages, including ALGOL60. Note, then, that all of this was happening under the umbrella of exact mathematics and, hence, in an arbitrarily large and potentially infinite setting. The finiteness was intentionally ignored.
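Strachey's sum-of-domains construction (V = T + N + R, L* = L + L² + L³ + …) can be loosely sketched in code; the modeling below is mine, not his, and only gestures at the idea.

```python
from typing import Union

# Stored values V = T + N + R: a tagged sum of Booleans, Integers
# and Reals, modeled here as a Python union. (Note that float only
# approximates R; that approximation is exactly the finiteness the
# ALGOL60 report takes seriously.)
V = Union[bool, int, float]

# In the exact mathematical setting N is unbounded; Python's int
# happens to behave that way. This value has 101 decimal digits.
big = 10 ** 100

# Vectors L* = L + L^2 + L^3 + ...: finite but arbitrarily long
# sequences drawn from the domain of locations.
Location = int
v = [0, 1, 2]  # an element of L^3; no bound is imposed on length

print(len(str(big)), len(v))
```

The point of the sketch is the absence of any size bound: nothing in the domain definitions mentions word lengths or memory limits.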
In 1973 Dijkstra, not unlike Naur and van Wijngaarden in the 1960s, viewed real arithmetic and programming languages in a finite setting. He did not want to follow Strachey and Scott in abstracting away from that finiteness. In Dijkstra's words:
We are considering finite computations only; therefore we can restrict ourselves to computational processes taking place in a finite state machine — although the possible number of states may be very, very large — and take the point of view that the net effect of the computation can be described by the transition from initial to final state. [EWD 367, p.0] [EWD 372, p.1]
The previous passage appears in both Dijkstra's draft and final texts: EWD 367 and EWD 372. But the latter also contains a critical remark for those who ignored the finiteness of programming problems. Note that Dijkstra attributed the infinite setting to Turing and Scott; that is, to mathematical logic in general.
(Since the classical work of A.M. Turing, and again since the recent work of D. Scott, one often encounters the tacid [sic] assumption that size and speed of today's computers are so huge, that the inclusion of infinite computations leads to the most appropriate model. I would be willing to buy that if — as in the case of “the line at infinity”, sweetly leading to projective geometry — the suggested generalization would clean up the majority of the arguments. Thanks to Cantor, Dedekind et al., however, we know that the inclusion of the infinite computation is not a logically painless affair, on the contrary! In the light of that experience it seems more effective to restrict oneself to finite computations taking place in a finite, but sufficiently large universe, thereby avoiding a number of otherwise self-inflicting pains. Those mathematicians that are so biased as to refuse to consider an area of thought worthy of their attention, unless it pays full attention to their pet generalizations, should perhaps not try to follow the rest of my argument.) [EWD 372, p.1, my bold]
On the one hand, Dijkstra, Parnas, and others insisted throughout their careers on not ignoring the finiteness of real machinery, even when reasoning abstractly about software. On the other hand, contrast the above passage with my post on the older Dijkstra in 1981, when he did exploit the infinite, albeit in his work on mathematical methodology, not programming methodology.
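Dijkstra's finite-state view, in which the net effect of a computation is simply the transition from an initial state to a final state, can be sketched as follows (the tiny program and the state representation are my own illustration, not Dijkstra's):

```python
# A state is a finite mapping from variable names to values; a
# terminating program denotes a function from states to states.

def program(state: dict) -> dict:
    """The two-line program  x := x + 1; y := 2 * x."""
    s = dict(state)      # leave the initial state intact
    s["x"] = s["x"] + 1  # x := x + 1
    s["y"] = 2 * s["x"]  # y := 2 * x
    return s

initial = {"x": 3, "y": 0}
final = program(initial)

# Net effect: the transition {x: 3, y: 0} -> {x: 4, y: 8}; the
# intermediate steps are invisible in this description.
print(final)
```

Because the variables range over a finite (if very large) set of machine values, the set of such states is finite, which is precisely the restriction Dijkstra wanted to retain.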
During the 1970s, Dijkstra became increasingly interested in Hoare's axiomatic reasoning. Intuition, so dominant in his metaphors and analogies of the 1950s and 1960s, was making way for purely formal reasoning.
I apologize most sincerely for the last few pages of formal labour, which took me a few days to write down, while a greater expert in the propositional calculus probably could have done it much more efficiently. It is by now most urgent that we relate the above to our intuitive understanding of the recursive procedure: then all our formulae become quite obvious (and the painfulness of my formal proofs becomes frustrating!) [EWD 367, p.10]
[1] Panel Discussion: `Efficient Processor Construction'. Proceedings of the International Symposium of Symbolic Languages in Data Processing, C.M. Popplewell (ed.), March 1962.
[2] E.G. Daylight, The Dawn of Software Engineering: from Turing to Dijkstra, Lonely Scholar (2012).
[3] C. Strachey, `The Varieties of Programming Language', Technical Monograph PRG-10, Oxford University Computing Laboratory, March 1973. The monograph is a revised and slightly expanded version of a paper given at the International Computing Symposium at Venice, 12-14 April 1972.
[4] J.W. Backus et al., P. Naur (ed.), `Report on the algorithmic language ALGOL 60', CACM 3.5, p. 299-314, 1960.
[5] A. van Wijngaarden, `Numerical analysis as an independent science', BIT 6, p. 66-81, 1966.
[EWD 367] E.W. Dijkstra, `EWD 367: On the axiomatic definition of semantics', 29 April 1973.
[EWD 372] E.W. Dijkstra, `EWD 372: A simple axiomatic basis for programming language constructs', 8 May 1973.
[6] E.G. Daylight, `Towards a Historical Notion of "Turing — the Father of Computer Science"'. Submitted to a special issue in the journal History and Philosophy of Logic.