Simulation Testing’s Uncanny Valley Problem

No one wants to be hurt because they're inadvertently driving next to an unproven self-driving vehicle. However, the costs of validating self-driving vehicles on public roads are extraordinary. To mitigate this, most autonomous-vehicle developers test their systems in simulation, that is, in virtual environments. Starsky uses limited low-fidelity simulation to gauge the effects of certain system inputs on truck behavior. Simulation helps us learn the proper force an actuator should exert on the steering mechanism to achieve a turn of the desired radius; it also helps us model the amount of throttle pressure needed to achieve a certain acceleration. But over-reliance on simulation can actually make the system less safe. To state the issue another way, heavy dependence on testing in virtual simulations has an uncanny valley problem.
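
To make the idea concrete, here is a minimal sketch of what such a low-fidelity input-to-behavior simulation might look like: sweep candidate steering-actuator forces, map each to a steering angle through an assumed linear stiffness, and keep the force whose predicted turn radius best matches a target. The wheelbase, stiffness constant, the linear force-to-angle model, and the function names are all illustrative assumptions, not Starsky's actual parameters or code.

```python
import math

WHEELBASE_M = 6.0        # assumed tractor wheelbase, meters
STEER_STIFFNESS = 0.002  # assumed radians of steering angle per newton of actuator force


def predicted_turn_radius(actuator_force_n: float) -> float:
    """Kinematic estimate: radius = wheelbase / tan(steering angle)."""
    steer_angle_rad = STEER_STIFFNESS * actuator_force_n
    return WHEELBASE_M / math.tan(steer_angle_rad)


def force_for_radius(target_radius_m: float) -> float:
    """Sweep candidate actuator forces and return the one whose
    predicted turn radius is closest to the target."""
    candidates = [float(f) for f in range(1, 501)]  # 1 N .. 500 N, 1 N steps
    return min(candidates,
               key=lambda f: abs(predicted_turn_radius(f) - target_radius_m))


if __name__ == "__main__":
    target = 40.0  # desired turn radius, meters
    force = force_for_radius(target)
    print(f"~{force:.0f} N of actuator force for a {target:.0f} m radius "
          f"(predicted {predicted_turn_radius(force):.1f} m)")
```

The same pattern, sweep an input, run a cheap physics model, compare against the desired behavior, would apply to finding the throttle needed for a target acceleration.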

First, some context. Simulation has arisen as a method to validate self-driving software as the autonomy stack has increasingly relied on deep-learning algorithms. These algorithms are massively complex, so complex that, given the volume of data the AV sensors provide, it's essentially impossible to discern why the system made any particular decision. They're black boxes that even their developers don't really understand. (I've written elsewhere about the problem with deep learning.) Consequently, it's difficult to eliminate the possibility that they'll make a decision you don't like.

When Poor UX Design and Glitchy Software Put Lives at Risk

Photo credit: US Air Force/Steve Pivnick

As software becomes increasingly ubiquitous in all of our lives, the consequences of its inevitable failures grow as well. Case in point: when the United States rushed to digitize medical patient records back in 2009, blinded by the glow of a $36 billion government carrot, it inadvertently set off a chain of events that has impaired countless lives, in some cases forever.