I've tried to (mostly) let go of my worries pertaining to self-driving cars, a topic which I initially called self-crashing cars more than three years ago, and which made me examine some of Wittgenstein's thoughts for the first time. I wasn't an expert on Wittgenstein back then and I'm still not (really) one today. Unfortunately, however, my safety engineering concerns, about attempts to capture everything in rules, seem to have been warranted; for here's an article that appeared in Wired a few days ago: "Uber's Self-Driving Car Didn't Know Pedestrians Could Jaywalk."
Incidentally, I'm currently reading one of Hubert L. Dreyfus's books: "Mind Over Machine" (1986, The Free Press), co-authored with his brother, Stuart E. Dreyfus. Hubert Dreyfus was influenced by the writings of Ludwig Wittgenstein, Martin Heidegger and Maurice Merleau-Ponty. His perspective, in short: not every form of (human) knowledge can be expressed in writing, let alone mathematically, with rules in a formalism, with a computer program. Quite a few scholars have tried to make this general point in recent years, many of whom have very different backgrounds and opinions. One example is Edward A. Lee. For another, rather different, analysis, check out the writings of Giuseppe Longo.
If some of us (= computer scientists) still harbor hard feelings when coming across the name "Dreyfus," then perhaps we can read the following book instead:
- Brian Cantwell Smith, "The promise of artificial intelligence: reckoning and judgment," MIT Press, 2019.
I was eager to read this book (and I did, with much satisfaction) because I was curious about what Smith had to say about self-driving cars. Earlier this year I watched one of his online talks in which he expressed a nuanced position, albeit only briefly. His book goes all the way.