Here's an abstract, entitled The Turing Machine as a Boundary Object: Sorting Out American Science and European Engineering, co-authored with Erhard Schüttpelz and featuring Marvin Minsky and Edsger Dijkstra in the late 1960s and early 1970s. It will be presented this summer in London at the 11th British Wittgenstein Society Conference: Wittgenstein and AI.
- I've recorded my talk, which you can find here.
- Based on a follow-up Q&A session, I have received the following suggestion (if not correction) from a Wittgensteinian scholar: the words "complete permeability" should be replaced with "potentially complete permeability," for the long list of permeability statements -- which we obtained from the permeability table -- is not exhaustive. According to the scholar, the whole point of family resemblance is that the listing is open-ended.
Update 2022-August-15: Questions received via email:
- Surely an infinite abstraction is occasionally very useful? Yes, as illustrated in an exchange between Parnas & Chaudhuri et al. in Chapter 5.2 of my Turing Tales book (and elsewhere on this blog). But the infinite abstraction cannot fully cover the 'computer programming' practice under scrutiny. (Think of lazy evaluation and garbage collection ... somewhere, e.g. at runtime, the infinite abstraction actually breaks down. And _this_ knowledge is often overlooked.)
- You seem to be suggesting that a Boundary Object (BO) and family resemblances are mutually exclusive; can you clarify? My current understanding: if I give you some family of members (which can grow the next day) and you tell me today that you can definitely capture the essential commonality in, say, some tenet, then you have -- or come close to having -- a BO, but you don't have family resemblances as defined by Wittgenstein. Concerning "family resemblance," here's a quotation from Oskari Kuusela's "Wittgenstein's Reception of Socrates" in Brill's Companion to the Reception of Socrates (ed. Christopher Moore):
- "Wittgenstein ... rejects what Socrates seems to accept without any argument, namely, that the unity of concepts ... can always be captured in terms of an overarching definition that determines the scope and bounds of a concept in terms of features common to all cases that fall under it. While this assumption of simple conceptual unity, as we can call it, was accepted by Wittgenstein in the Tractatus, his later philosophy is characterized by its abandonment qua assumption. For it is not that a concept could not have this kind of unity. We just cannot assume concepts always do. As he explains in a discussion from 1941, making the point with reference to the notion of meaning (rather than concept): "When I wrote [the Tractatus], I had Plato's idea of finding the general idea lying behind all particular meanings of a word. Now I think of the meanings as like the fibres of a rope. One may run the whole way through, but none may" (PPO 387). Thus, according to Wittgenstein, there need be no common meaning or underlying idea that unifies all cases of the use of a word to express a particular concept. It is not part of the concept of a concept that conceptual unity depends on something shared by all cases that fall under it. Other modes of conceptual unity, such as family resemblance, are possible ..." (p. 5 from a preprint, original emphasis)
- For those of us who are less acquainted with your research, can you please clarify what all of this is about? The very idea that, whatever an engineer will ever build -- even, say, 100 years from now -- we logicians already know what *that* will be (i.e., at best a universal TM), is enormously fantastic -- as in 'fantasy'. (That's not unlike Russell when he championed his logicism, against strong disagreement from Poincaré and Hobson.) Engineers deal with and know the *physics* of a cell phone, a laptop, etc. So, I take *physics* to sit in the driver's seat, not symbols in logic. In this regard, see Edward A. Lee's recent talk. Concerning Michael Sipser's quotation, discussed in my talk, here's a Wittgensteinian response: "An unsuitable type of expression is a sure means of remaining in a state of confusion. It as it were bars the way out" (Wittgenstein 1953: paragraph 339). Concerning the word "science" in "computer science," here's the crux of science, as explained by Carlo Rovelli in his 2016 book, entitled Reality Is Not What It Seems (Penguin Books): "The answers given by science, then, are not reliable because they are definitive. They are reliable because they are not definitive. They are reliable because they are the best available today. And they are the best we have because we don't consider them to be definitive, but see them as open to improvement. It's the awareness of our ignorance that gives science its reliability" (pp. 230-1).
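The remark above about lazy evaluation can be illustrated with a minimal Python sketch (my own illustration, not from the book): an infinite stream is a perfectly useful abstraction, as long as the running program only ever demands a finite prefix of it.

```python
from itertools import count, islice

# The infinite abstraction: the stream of all natural numbers.
naturals = count(0)

# Lazy evaluation lets the program work with the infinite object,
# provided it only ever demands a finite prefix of it.
first_five = list(islice(naturals, 5))
print(first_five)  # [0, 1, 2, 3, 4]

# Asking the runtime to materialize the whole stream -- e.g.
# list(count(0)) -- would exhaust memory: the infinite abstraction
# breaks down in the physical machine, not in the mathematics.
```

The breakdown is invisible as long as one stays inside the mathematical picture; it only shows up at runtime, which is exactly the knowledge that is often overlooked.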
Update 2022-August-9: Quoting from Oskari Kuusela's 2019 book, entitled Wittgenstein on Logic as the Method of Philosophy (Oxford University Press):
- The older Wittgenstein. "[J]ust as an idealized picture of reality, such as a map, can make it clearer how things relate to one another in reality, similarly an ideal language can make logical relations clearer. But an idealized picture of reality does not provide us with a standard for how reality ought to be. [Unless the neo-Russellian tenet holds, which is not the case.] Requiring reality to conform to an idealization is to do something in addition to idealizing, and this is something very different from clarifying aspects of reality by way of comparing reality with an idealized picture. By contrast, Wittgensteinian logical clarification lets reality (including actual uses of natural language) be whatever it is, in all its messiness, even though logic needs to portray reality in less messy terms for the purpose of clarification [...] Note also how the requirement that reality should correspond to a logical idealization involves an ontological claim that is foreign to science. Wittgensteinian logical idealization corresponds in this regard to scientific idealization, neither of which implies ontological claims [...]" (p. 235, my emphasis)
- Wittgenstein, switching from the young to the older intellectual. "Essentially, to turn the examination around is to abandon the approach that treats the ideals of simplicity and exactness as requirements that thought and language must meet, and to adopt an alternative approach that understands exactness and simplicity as part of logic's mode of examination, and as characteristic of its methods of clarification [...] A crucial issue is how to avoid falsely claiming that the uses of language are more exact or simple than they are. Importantly, according to Wittgenstein, this falsification arises from logicians presenting the idea that language is governed by strict and precise rules, as if making an assertion about actual language use. But this description of the problem already hints at its solution. The problem can be avoided by putting forward the strict and precise rules, not as a claim about language, but as a particular way in which logic, for its purposes, seeks to describe the uses of language [...]" (p. 126, my emphasis)
Update 2022-August-11: Quoting from Paul M. Livingston's 2012 book, entitled The Politics of Logic (Routledge):
- "The normativity that we expect from, and regularly find, in the actions of computers is not simply an outcome of their actual construction or their "ideal" architecture, but is rather possible only on the basis of the kinds of "agreement" that first enable us to engage in shared practices at all." (p.170)
- We, logicians & computer scientists, agree at the outset that it makes sense to say that my laptop is "Turing complete." (Engineers have their reasons to take that statement as nonsense.)
- It is also common in computer science to agree on the following convention: my laptop implements a computable function, but not a non-computable one. (But a bit of thinking reveals that not even the identity function is implemented by my laptop, even when I abide by the computer science conventions. Hint: the memory space needed to store the input is part and parcel of what is called a "computation" in the computer science textbooks.)
- "Here ... the ability of such instruments and devices to determine the distinction between correct and incorrect results ... does not and cannot rest entirely on anything given or wholly determined by the actual construction of the instruments and devices themselves. It depends, instead, on the preexisting practices, techniques, and ways of life in which these instruments and devices, as well as the interpretation of their results and their implications, have their normal roles." (p. 169, original emphasis)
- The academic practice in computer science is to treat my laptop as a computable function. (Actual programming practices in industry don't do this, or only to a limited extent. The function abstraction breaks down in real life, and that's when and where the computer science textbooks become useless.)
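The points above about the identity function and the breakdown of the function abstraction can be made concrete with a toy Python sketch of my own (MEMORY_CAPACITY is a made-up constant; real machines differ only in scale): the mathematical identity function is total, but a finite machine only realizes a bounded fragment of it, because storing the input is part of the computation.

```python
# A toy laptop with a fixed memory capacity, in bytes (a
# hypothetical constant, chosen small for illustration).
MEMORY_CAPACITY = 8

def laptop_identity(x: bytes) -> bytes:
    """The identity function as a finite machine runs it: storing
    the input is part and parcel of the computation, so the machine
    only computes identity for inputs that fit in memory."""
    if len(x) > MEMORY_CAPACITY:
        raise MemoryError("input does not fit in the machine")
    return x

assert laptop_identity(b"abc") == b"abc"  # fine for small inputs

# For larger inputs the abstraction breaks down at runtime:
# the mathematical identity function is total, laptop_identity
# is not. This is where the textbook function abstraction ends.
try:
    laptop_identity(b"x" * 100)
except MemoryError:
    pass
```

On the textbook convention, my laptop "implements" the identity function; on this sketch's more physical reading, it implements a finite restriction of it, and no convention makes the restriction go away.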