
In a “Retro Report” published in the New York Times on September 14, Clyde Haberman correctly reminds us that designing automation for automobiles needs to take careful account of lessons learned (often the hard way) when earlier technologies were applied to cars.  But in designing the software for such automation, we also need to consider lessons learned (again, often the hard way) in other domains.


Fundamentally, we have an interaction between man and machine.  No matter how sophisticated the controls, there are times when the operator must take over, either because of a situation unforeseen by the designers or because the operator inadvertently asked for something unintended.  When that happens, the operator is transitioning from an observational to a controlling role. That transition can be dangerous.  Exactly such transitions have contributed to many airplane accidents in recent years, most recently the Asiana crash at SFO.


Every pilot is taught, from their very first lesson, the importance of maintaining precise airspeed control during landing.  Yet in the Asiana crash, three professional pilots were monitoring the landing and none of them noticed, let alone did anything about, an airspeed that had been dropping for many minutes.  How could that have happened?  The answer isn’t in the domain of aviation or computer engineering, but rather in the domain of psychology and physiology.


The problem is worse in the automotive industry.  Transitioning between automated and manual control requires maximal alertness, yet automation may be engaged precisely when the driver’s alertness is at its lowest.  These concerns have been with us since the first cruise control, but they grow as we automate more, and the lesson from aviation is that we need to move very carefully here.


Other lessons from aviation are more encouraging.  We seem to hear almost weekly about the latest software “glitch”, and everybody has experienced weird behavior from phones and their apps.  It’s easy to assume things have to be this way.  But they don’t.  There has never been a fatality due to an error in aviation software, though there have been some close calls.  That hasn’t been luck.  An extensive “certification” methodology produces a high degree of confidence that the software performs as designed.  Roughly speaking, the process involves writing huge numbers of tests, each traced to a requirement the software must meet, which together verify the proper operation of every line of code and every decision point.  It is a very formal process, checked by independent reviewers.  It’s also very expensive: certifying the software for a modern airplane costs hundreds of millions of dollars.  Since there are far more cars than airplanes, and people spend far more time in cars than in airplanes, the process used for automotive software should be more stringent than the one used for avionics.  But it isn’t.

One lesson we should learn from the aviation domain is that it needs to be.
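
To make that testing process concrete, here is a minimal sketch of requirement-based testing in C; the requirement identifier, the function, and the threshold are invented for this article, not taken from any real avionics code base.  Each test is traced to the requirement it exercises, and together the three tests cover every line and both outcomes of each decision in the function under test.

    #include <assert.h>
    #include <stdbool.h>

    /* Hypothetical requirement REQ-42: the low-speed warning shall be raised
       when airspeed falls below the threshold while the aircraft is in landing
       configuration, and shall not be raised otherwise. */
    bool low_speed_warning(double airspeed_kts, double threshold_kts,
                           bool landing_config)
    {
        if (landing_config && airspeed_kts < threshold_kts)
            return true;
        return false;
    }

    int main(void)
    {
        /* Test 1 (REQ-42): below threshold in landing configuration -> warning. */
        assert(low_speed_warning(120.0, 137.0, true) == true);

        /* Test 2 (REQ-42): at or above threshold -> no warning
           (exercises the "false" outcome of the airspeed comparison). */
        assert(low_speed_warning(140.0, 137.0, true) == false);

        /* Test 3 (REQ-42): not in landing configuration -> no warning
           (exercises the "false" outcome of the configuration condition). */
        assert(low_speed_warning(120.0, 137.0, false) == false);

        return 0;  /* every line and every decision outcome has been exercised */
    }

In a real certification effort, the same traceability is maintained across the full set of requirements, and the coverage achieved is measured and independently reviewed rather than merely asserted.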


We can now use mathematics to help.  Although it’s not practical to prove large pieces of software “correct” (among other reasons, because spelling out everything “correct” would have to mean is impractically detailed), there are definitely things we can accomplish through formal methods.  We can prove that the software doesn’t violate the rules of the programming language (such as referencing undefined data or dividing by zero).  We can prove that components of the software do what they’re supposed to (e.g., that a routine purporting to compute a square root actually does so).  And we can state important system properties, such as the throttle always being closed when the brake has been pressed, and prove that they hold.  Using these techniques not only leads to more reliable software but also reduces testing costs, because it significantly reduces the number of tests needed.
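
As a small illustration of the second kind of proof, here is a sketch of an integer square-root routine with its precondition and postcondition written out.  The routine and its contract are invented for this article, and the C assertions merely restate the conditions a formal tool would be asked to prove hold for every permitted input, along with the absence of run-time errors anywhere in the body.

    #include <assert.h>

    /* Integer square root: returns the largest r such that r*r <= n.
       Precondition:  0 <= n <= 1000000000  (keeps all the arithmetic in range)
       Postcondition: r*r <= n < (r+1)*(r+1) */
    long isqrt(long n)
    {
        assert(n >= 0 && n <= 1000000000L);   /* precondition */

        long r = 0;
        while ((r + 1) * (r + 1) <= n)        /* invariant: r*r <= n */
            r = r + 1;

        assert(r * r <= n);                   /* postcondition, part 1 */
        assert((r + 1) * (r + 1) > n);        /* postcondition, part 2 */
        return r;
    }

    int main(void)
    {
        assert(isqrt(0) == 0);
        assert(isqrt(15) == 3);
        assert(isqrt(16) == 4);
        return 0;
    }

The difference between the assertions and a proof is coverage: the assertions are checked only for the inputs that happen to occur, while a proof covers every value the precondition allows.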


The final issue is security, and proofs are the only reliable way of securing software.  Testing isn’t good enough, because it can only exercise known vulnerabilities.  Yet merely proving that the software doesn’t violate language rules is enough to catch the vast majority of known security holes, and would have prevented, among many others, the exploit known as “Heartbleed”.
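
For the curious, here is a sketch in C of the pattern behind that class of bug; it is not OpenSSL’s actual code, and the function names are invented.  The reply is built from a length the peer claims, and nothing checks that claim against the number of bytes actually received, so memory beyond the payload can be read and sent back.  Proving that every memory access stays within the bounds of its buffer, a language-rule property, is exactly what rules this out.

    #include <stddef.h>
    #include <string.h>

    /* The flawed pattern: copy as many bytes as the peer claims to have sent. */
    size_t build_echo_reply_unsafe(char *reply, const char *payload,
                                   size_t claimed_len)
    {
        memcpy(reply, payload, claimed_len);   /* may read far past the payload */
        return claimed_len;
    }

    /* The missing check a prover would demand before it could show that the
       copy stays inside the payload buffer. */
    size_t build_echo_reply_safe(char *reply, const char *payload,
                                 size_t claimed_len, size_t received_len)
    {
        size_t len = claimed_len <= received_len ? claimed_len : received_len;
        memcpy(reply, payload, len);
        return len;
    }

    int main(void)
    {
        char payload[4] = { 'p', 'i', 'n', 'g' };   /* 4 bytes actually received */
        char reply[64];
        /* The peer claims 64 bytes; the safe version copies only the 4 real ones. */
        return build_echo_reply_safe(reply, payload, 64, sizeof payload) == 4 ? 0 : 1;
    }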


Fundamentally, security is about the boundary where the software interacts with its environment.  Software is “secure” when the only communication that can occur across that boundary is the communication we desire.  We can write those requirements down as conditions and prove that the software meets them.
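
Returning to the throttle-and-brake example from earlier, such a boundary condition can be written as a predicate over what the software reads from the vehicle and what it commands back, and the proof obligation is that every output the control software can produce satisfies it.  A sketch, with state and signal names invented for this article:

    #include <assert.h>
    #include <stdbool.h>

    /* What the controller reads from the vehicle and what it commands across
       the boundary (a throttle_command of 0.0 means fully closed). */
    struct inputs  { bool brake_pressed; double requested_throttle; };
    struct outputs { double throttle_command; };

    /* The desired condition at the boundary: whenever the brake is pressed,
       the commanded throttle is closed. */
    bool boundary_condition(struct inputs in, struct outputs out)
    {
        return !in.brake_pressed || out.throttle_command == 0.0;
    }

    /* One step of a much-simplified controller.  A formal tool would be asked
       to prove that boundary_condition holds for every possible input, not
       just the handful of cases a test can cover. */
    struct outputs control_step(struct inputs in)
    {
        struct outputs out;
        out.throttle_command = in.brake_pressed ? 0.0 : in.requested_throttle;
        return out;
    }

    int main(void)
    {
        struct inputs in = { .brake_pressed = true, .requested_throttle = 0.4 };
        assert(boundary_condition(in, control_step(in)));
        return 0;
    }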


Using lessons both from the past of automotive design and the present of design in other domains, such as avionics, will give us the best chance of making the cars of the future safe, secure, and driver-friendly.


About Richard Kenner

Richard Kenner is a co-founder and Vice President of AdaCore. He was a researcher in the Computer Science Department at New York University from 1975 through 1998. During that time, he worked with the SETL Project, a research project in high-level programming languages; the PUMA Project, which designed and built an emulator for the CDC 6600 in the late 1970s; and, for many years, the Ultracomputer Project, which did research on highly parallel computer systems. At Ultra, he worked on hardware design, VLSI design, and compilers. As part of the latter work, he maintained the GCC compiler (on which the GNAT technology is based) for the Free Software Foundation for the 2.5.x through 2.8.x releases. He published papers on both compiler design and the VLSI design of a switching network for Ultra, and was the designer of the interface between the GNAT front end and GCC.