Wittgenstein — cars — Turing

Dated: 1 September 2016

Researchers have the responsibility of making clear the limits of their understanding of technology, including the software that is soon to be deployed in self-driving cars. Just as most people do not want conventional cars with drunken drivers near their loved ones, I shall give arguments (which complement my previous arguments: here and a follow-up here) for eschewing self-driving cars as well.

My blog posts on self-driving cars constitute an article that I have written and submitted to journal X for peer review. Fortunately, journal X has granted me the right to keep these posts on-line.

In my opinion, self-driving cars should be tested in well-delimited areas for the rest of the present century before they are “let loose” on our roads. Unfortunately, given the extremely large investments in technology for fully autonomous vehicles, it seems that self-driving cars are coming and are here to stay. If so, then I foresee the following publication in a few decades:

During the early years of research in self-driving cars, considerable progress created among many the strong feeling that a working system was just around the corner. The illusion was created by the fact that a large number of problems were rather readily solved. It was not sufficiently realized that the gap between such output and safety-critical real-time software proper was still enormous, and that the problems solved until then were indeed many but just the simplest ones whereas the “few” remaining problems were the harder ones—very hard indeed.

This passage is my slight modification of what Yehoshua Bar-Hillel wrote in his famous 1960 report `The Present Status of Automatic Translation of Languages', in which he retrospectively scrutinized the field of machine translation [1]. Bar-Hillel's original words also appear in the philosophical writings of Hubert Dreyfus: What Computers Can't Do [3] and What Computers Still Can't Do [4], books that changed the course of 20th-century research in Artificial Intelligence (AI). As Dreyfus explains, Marvin Minsky, John McCarthy, and other AI researchers implicitly took for granted that common knowledge can be formalized as facts. Following Plato, Gottfried Leibniz, and Alan Turing, many computer scientists blindly assume that a technique must exist for converting any practical activity, such as learning a language or driving a car, into a set of instructions [4, p.74]. Ludwig Wittgenstein, by contrast, opposed such a rationalist, metaphysical view of our (now digital) world. In explaining our actions, Wittgenstein would say, we must always

sooner or later fall back on our everyday practices and simply say `this is what we do' or `that's what it is to be a human being' [4, p.56-57].

Wittgenstein and computer scientists like Peter Naur and Michael Jackson would object to the supposition that human behavior can be perfectly replaced by man-made technology [2, 6]. They would disagree with a number of prominent people who have recently raised concerns that AI systems have the potential to demonstrate superintelligence. While a human will circumvent a big rock but not a crumpled-up piece of newspaper lying on the road, a self-driving car will try to drive around both [5]. Observations like these clarify why HAL, the superintelligent computer in 2001: A Space Odyssey, remains science fiction. Why would mankind now all of a sudden be able to develop a HAL that drives on our cities' roads, circumventing pedestrians, bicyclists, and anything else that humans throw at it [7]?

People in the automotive industry who have patience with my philosophical reflections often end up falling into Dreyfus's trap by saying the following:
Just add a new rule to the car's software to handle each "corner case." Keep doing this for every corner case.
An example of a corner case is the crumpled-up piece of newspaper. If we want the car to drive over the newspaper, we need to add extra sensors and functionality to the vehicle and a new rule to the vehicle's controller. (The extra hardware makes the car more expensive and the additional rule makes the software more complex.)
*
The problem with this "add a new rule" solution is that the size of the software grows linearly with the number of rules. Instead, software should be a model of the real world, a model that is much smaller than the real world itself. That is, we want software to be a compressed representation of all the things it entails, not an explicit list of rules.
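To make this contrast concrete, here is a minimal, purely illustrative Python sketch. The obstacle types, the mass threshold, and the function names are all invented for the example; no actual vehicle controller is claimed to work this way. The first function grows by one branch for every corner case, while the second condenses the decision into a single compact criterion.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str        # e.g. "rock", "newspaper", "pedestrian" (illustrative labels only)
    mass_kg: float
    is_moving: bool

# Approach 1: one hand-written rule per corner case.
# Every new corner case adds another branch, so the code grows
# roughly linearly with the number of cases handled.
def should_drive_around_rules(obstacle: Obstacle) -> bool:
    if obstacle.kind == "rock":
        return True
    if obstacle.kind == "newspaper":
        return False            # crumpled-up newspaper: just drive over it
    if obstacle.kind == "pedestrian":
        return True
    # ... a new rule for every further corner case ...
    return True                  # conservative default

# Approach 2: a single compact decision criterion that generalizes,
# i.e. a "compressed representation" instead of an explicit list.
# The 1 kg threshold is a made-up illustration, not an engineering value.
def should_drive_around_model(obstacle: Obstacle) -> bool:
    return obstacle.is_moving or obstacle.mass_kg > 1.0

if __name__ == "__main__":
    paper = Obstacle("newspaper", 0.05, False)
    rock = Obstacle("rock", 20.0, False)
    print(should_drive_around_rules(paper), should_drive_around_model(paper))  # False False
    print(should_drive_around_rules(rock), should_drive_around_model(rock))    # True True
```

The toy "model" here is of course still hand-written; the point is only that its size does not track the number of situations it covers, whereas the rule list does.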
*
I am willing to change my position on self-driving cars if good arguments are brought forward, preferably by philosophically-inclined computer scientists. What worries me is that:
  • Self-driving cars are already being deployed on the streets of multiple US states (and I think the public has the right to be well informed about what is really going on).
  • I have yet to come across a paper written by a self-driving-car advocate who addresses the philosophical issues raised by Dreyfus. If you, the reader, think that Dreyfus's arguments are easily dealt with (in connection with safety-critical software, not Internet apps), then please consider writing a paper on this matter or reply to this blog post.
I am aware that machine learning is part and parcel of self-driving car technology and I would love to learn more about this. But I have also been told by more than one professional that "good old-fashioned" rule-based learning is part of vehicle technology too.
*
REFERENCES
*
[1] Y. Bar-Hillel. The present status of automatic translation of languages. In F.C. Alt, editor, Advances in Computers, volume 1, pages 91-141. Academic Press, New York and London, 1960.
[2] E.G. Daylight. Pluralism in Software Engineering: Turing Award Winner Peter Naur Explains. Lonely Scholar, Heverlee, October 2011.
[3] H.L. Dreyfus. What Computers Can't Do: The Limits of Artificial Intelligence. Harper/Colophon, New York, 1979. Revised edition (the first edition appeared in 1972).
[4] H.L. Dreyfus. What Computers Still Can't Do: A Critique of Artificial Reason. MIT Press, Cambridge, MA, 1992.
[5] L. Gomes. Driving in circles: The autonomous Google car may never actually happen. www.slate.com. Accessed on 2016-Aug-17.
[6] M.A. Jackson and E.G. Daylight. Formalism and Intuition in Software Development. Lonely Scholar, Geel, August 2015.
[7] M. Konczal. The phenomenology of Google's self-driving cars. http://rooseveltinstitute.org/phenomenology-googles-self-driving-cars/, October 2014. Accessed on 2016-Aug-17.

6 Comments

machine learning versus explicit lists of rules

It is not always necessary to add a new rule to the on-board software. Perception in AVs is handled in large part by machine learning methods. A new example can be added to the training set without making the model larger. If the training dataset grows by an order of magnitude, it is a good idea to also increase the size of the model a bit to benefit from a more favorable point in the bias-variance trade-off, but this expansion is usually quite sublinear.

Machine learning models (usually) are already a form of the 'compressed representation' that you want.

(Rule learning is also a form of machine learning and indeed results in lists of rules, which may become longer when you have more examples, but again they won't necessarily grow linearly. I don't know if anyone uses old-fashioned rule learning for AVs.)
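To illustrate the commenter's point with a minimal sketch: the toy data, feature count, and training loop below are invented for the example and have nothing to do with how AV perception stacks are actually built. A fixed-capacity model (here a tiny logistic regression) keeps the same number of parameters no matter how many training examples are appended.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logistic(X: np.ndarray, y: np.ndarray, steps: int = 2000, lr: float = 0.1) -> np.ndarray:
    """Fit a fixed-size logistic-regression model: one weight per feature plus a bias."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
    w = np.zeros(Xb.shape[1])                   # model size = n_features + 1, independent of len(X)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))       # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)       # gradient step on the logistic loss
    return w

# Toy data: two features per "scene", label 1 = drive around the obstacle.
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
w_small = train_logistic(X, y)

# A "new corner case" arrives: append one more example and retrain.
X_new = np.vstack([X, [[0.1, -0.05]]])
y_new = np.append(y, 0.0)
w_large = train_logistic(X_new, y_new)

# The parameter vector has the same size either way: the model is a
# compressed representation of the data, not a list that grows with it.
print(w_small.shape, w_large.shape)   # (3,) (3,)
```

Whether the compressed representation actually generalizes to the corner cases discussed in the post is, of course, the open question.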

Wittgenstein's argument in context

+1 on the previous commenter.

Notwithstanding our general desire to make AI aware of the world as a whole rather than as a set of rules, let's remember that adding rules to the training set for every new situation is more or less what happened with evolution. Our current human ability to reason and avoid obstacles is the result of millions of years of adding new rules to the training set and seeing the neural network evolve as a result.

With computerized models we have an advantage over evolution: we can add new rules and evolve the model much faster than biology would allow.

corner cases remain problematic, also with deep learning

Deep learning runs into problems similar to those encountered with GOFAI (good old-fashioned AI):

"Or take a fluid scene like a dinner party. A person carrying a platter will serve food. A woman raising a fork will stab the lettuce on her plate and put it in her mouth. A water glass teetering on the edge of the table is about to fall, spilling its contents. Predicting what happens next and understanding the physics of everyday life are inherent in human visual intelligence, but beyond the reach of current deep learning technology."
--- cited from the aforementioned newspaper article (see the previous comment in this thread)

These scenarios are corner cases which --- following Wittgenstein --- differentiate the behavior of humans from the behavior of engineered systems.

waking up

These kinds of articles on self-driving cars were very hard to find just 18 months ago:

+ from The Guardian

+ from The Conversation