Submitted by egdaylight on
As a safety engineer at Altreonic, specializing in formal verification and hazard & risk analysis, I am currently contributing to the design and implementation of a Software-Controlled Light Electric Vehicle called Kurt (named after Kurt Goedel). Besides carrying a driver who sits on it and steers, the Kurt vehicle can also receive steering requests over a wireless channel from a remote-control device. Hospitals, factories and cities in Flanders can, and hopefully will, benefit from several Kurt vehicles.
Fortunately, I am in the business of semi-autonomous driving, not fully-autonomous technology. I personally don't believe in the software safety of self-driving cars and would rather call such vehicles “self-crashing cars.” (Nor do I advocate semi-autonomous cars that are almost fully autonomous, i.e., that contain “too much” software.) Specifically, I do not want to live in a city where self-driving cars appear on the same roads that my children take to go to school, just as I don't want manually driven cars with drunken drivers in the vicinity of my family. My wish is that, if self-driving cars are here to stay, then only a few cities in the world will experiment with them for, say, the rest of the present century. The main rationale is that it can take more than a decade before the many software bugs in a computer have been detected, and hence also in the computers that make up a self-driving car. Moreover, nobody can guarantee at any time that all errors, broadly construed, have been found.
Many people think that a self-driving car will be safer than a manually-controlled vehicle because the human driver is taken out of the loop. They forget, however, that the software of the self-driving car has been written by a human in the first place. So the human factor is very much present in self-driving cars too. To the best of my knowledge, computer scientists neither have, nor pretend to have, good statistics to suggest that the software in a self-driving car will be “good enough” for society at large. Contrast this with the public opinion that “statistics says” society will fare better when all drivers, drunk or not, have been replaced by software and computers.
While it is relatively easy for a professional software engineer to drastically improve the code of a drunken programmer, the professional cannot do the same with the software of a sober colleague, even though both engineers know very well that the software under scrutiny contains errors. Getting software to be 100% error-free before deployment has always been, and still remains, computer science's grand challenge. Likewise, preventing a malicious user from hacking into somebody else's computer is, today, still much more an engineering activity than a science. Digital systems frequently receive software updates to “patch up” recently detected software vulnerabilities. The same will have to happen with the software in self-driving cars unless, of course, we decide to abandon the silly idea of self-driving cars from the start.
Suppose, however, that you do want to sit, and will sit, in a self-driving car C1 of manufacturer M in the near future. (I am okay with your dangerous lifestyle, provided you grant me and like-minded wimps the freedom to live in a city that prohibits you from visiting us in your self-driving car.) Suppose that prior to departure, your car has received a software update. Then it is quite possible that another self-driving car C2 of manufacturer M has recently experienced problems on the road. The passengers of that other car C2 could be injured or plain dead. Moreover, since your car can receive software updates, it can also receive malicious data from a hacker, e.g., a terrorist who programs your car via the Internet to autonomously crash into the terrorist's former employer. (The terrorist will survive the attack from his living room and be able to accomplish a similar assault the following week, when the next software updates are scheduled by M's software vendor V.)
Now, if our future world contains sufficiently many self-driving cars and terrorists who can hack into a computer system (in fact, one hacking terrorist on the loose will suffice), then society at large will definitely not fare better than in the world we live in now. A drunken driver on the road today is far less harmful than a well-chosen kind of software bug or security vulnerability in a self-driving car. The drunken driver will, in the worst case, cause local havoc only. A well-chosen kind of software problem, on the other hand, can lead to disaster on a global scale; i.e., global in time, in space, or both. A disaster that is global in time is, for example, the aforementioned terrorist who accomplishes one local car crash every month. A disaster that is global in space is, for instance, one terrorist who exploits a software vulnerability in all self-driving cars of manufacturer M and software vendor V on the same day, leading to havoc on a large spatial scale. The terrorist par excellence will strive for disaster that is global in both space and time; e.g., by having multiple self-driving vans, carrying toxic chemicals over the world's bridges, fall into rivers, so that each sunken van affects multiple villages or cities and thousands of people for several months.
To some extent my criticism also holds for semi-autonomous cars, because they, too, contain software that is simply impossible to get right before deployment. The question, then, which I will not delve into here, is which kinds of semi-autonomous vehicles are suitable for society at large and which are not. Furthermore, drones, robots and other semi- to fully-autonomous digital systems that are designed to replace human control, and that require the Internet (or some other kind of computer network) to get their functionality up and running, can be scrutinized in ways similar to what I have presented above.
Given the hype surrounding self-driving technology, it seems that my wish to officially constrain its deployment for the entire century will not come true any time soon. I do hope, however, that every country in the world will have at least one city that prohibits self-driving cars. Unfortunately, living in such a city will not make my anxiety vanish completely. A self-driving car could easily be programmed to enter my safe haven and wreak havoc on me and my fellow primitives of our digital age.
3 Comments
Hacking into a Tesla
Submitted by egdaylight on
A self-driving car can be turned into a car remote-controlled by someone you don't know: http://keenlab.tencent.com/en/
flying drones in a workshop
Submitted by egdaylight on
I just attended a workshop on "how to write a good research proposal" and guess what happened. During the reception, one of the organizers suddenly announced that some little drones would be flying through the crowd. "It's all safe," he said, "but don't stick your fingers into the drone; use your palms to push the drone away." The man "assured" us that if we took that precaution, all would be safe. He added that he was an expert in safety engineering, so he felt he had to reassure us even more.
Are we just supposed to accept this new way of life? I never asked for a drone to bump into my face. The workshop had nothing to do with drones.
Likewise, I don't want self-driving cars in the streets of my neighborhood, unless the public (a) votes on this and (b) is first well informed about the current state of the technology deployed in such cars.
a child
Submitted by egdaylight on
And what if a child suddenly enters the reception and *does* stick his finger into the drone?