The concerns raised by Brian Cantwell Smith in his classic 1985 paper "The Limits of Correctness" and the arguments advanced by Michael Jackson (e.g., in this booklet and on his website) are now re-amplified by Nancy Leveson in her CACM column "Are You Sure Your Software Will Not Kill Anyone?". All three specialists convey the following messages. In Leveson's words:
[V]irtually all accidents involving software stem from unsafe requirements.
Formal verification [...] can show only the consistency of two formal models.
Complete discrete mathematical models do not exist of complex physical systems.
The last quoted sentence is akin to the "Kopetz Principle", which I first encountered in Edward A. Lee's 2012 talk "Verifying Real-Time Software is Not Reasonable (Today)".
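To make the quoted point concrete, here is a minimal sketch of my own (not from Leveson or Lee; the formula and numbers are illustrative): an implementation "verified" against its formal specification. The check establishes consistency between two formal models only; the physical braking system, with its sensor latency, tire wear, and road conditions, appears in neither.

```python
# A minimal sketch: "verification" as consistency between two formal models.
# Neither model is the physical braking system itself.

def stopping_distance(v: float, a: float) -> float:
    """Implementation: distance (m) to stop from speed v (m/s) at deceleration a (m/s^2)."""
    return v * v / (2.0 * a)

def spec_stopping_distance(v: float, a: float) -> float:
    """Formal specification of the same quantity: d = v^2 / (2a)."""
    return v ** 2 / (2 * a)

# The check below relates implementation to specification on sampled inputs.
# It says nothing about reaction time, sensor error, or a wet road: exactly
# the real-world factors an unsafe requirement would have omitted.
for v in (10.0, 20.0, 30.0):
    for a in (4.0, 6.0, 8.0):
        assert abs(stopping_distance(v, a) - spec_stopping_distance(v, a)) < 1e-9
```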
Last month Ross Anderson lectured on the (unfortunate) advent of autonomous cars, the vulnerabilities of pacemakers, and the like: topics that are also covered extensively in a book by Marc Goodman. Now we have Nancy Leveson, of all specialists, expressing a preference for a human-in-the-control-loop approach.
Finally, and in line with the reflections of Smith, Jackson, and also Peter Naur, let me emphasize computer science's historic attachment to specifications and implementations. All the aforementioned authors would instead like us to focus on requirements engineering; that is, on the transition from a real-world problem to a set of requirements. Dijkstra referred to this transition as "The Pleasantness Problem," which I have discussed here as well, and which the sketch below tries to illustrate.
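The sketch is loosely modeled on Jackson's well-known sluice-gate example; the duty cycle and names below are my own assumptions. The requirement lives in the world: the field receives enough water. The specification speaks only the machine's language, and it satisfies the requirement only under domain assumptions, such as water actually flowing whenever the gate is up.

```python
# Requirement (about the world): the field is adequately irrigated each day.
# Specification (about the machine): raise the gate for ten minutes in every
# three-hour period. Bridging the two is requirements engineering, not
# verification; the duty cycle here is an assumed figure, not a derived one.

class SluiceController:
    OPEN_SECONDS = 10 * 60            # assumed: gate open ten minutes...
    PERIOD_SECONDS = 3 * 60 * 60      # ...in every three hours

    def motor_command(self, seconds_into_period: int) -> str:
        """Machine-interface behavior: 'raise' during the open window, else 'lower'."""
        return "raise" if seconds_into_period < self.OPEN_SECONDS else "lower"

controller = SluiceController()
assert controller.motor_command(0) == "raise"          # start of period: gate up
assert controller.motor_command(15 * 60) == "lower"    # after ten minutes: gate down
```

No proof about SluiceController can tell us whether ten minutes of flow suffices to irrigate the field; that question belongs to the transition from world to requirements, not to verification.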