A common fantasy for transportation enthusiasts and technology optimists is for self-driving cars and trucks to form the basis of a safe, streamlined, almost choreographed dance. In this dream, every vehicle – and cyclist and pedestrian – proceeds unimpeded on any route, as the rest of the traffic skillfully avoids collisions and even eliminates stop-and-go traffic. It’s a lot like the synchronized traffic chaos in “Rush Hour,” a short movie by Black Sheep Films.
Today, autonomous cars are becoming more common, but safety is still a question. More than 30,000 people die on U.S. roads every year – nearly 100 a day. That’s despite the best efforts of government regulators, car manufacturers and human drivers alike. Early statistics from autonomous driving suggest that widespread automation could drive the death toll down significantly.
There’s a key problem, though: Computers like rules – solid, hard-and-fast instructions to follow. How should we program them to handle difficult situations? The hypotheticals are countless: What if the car has to choose between hitting one cyclist or five pedestrians? What if the car must decide to crash into a wall and kill its occupant, or slam through a group of kindergartners? How do we decide? Who does the deciding?
So far, our transportation system has evolved to be operated by humans, who are good at following guidelines but can also interpret them flexibly to handle ambiguity. We stop midblock and wave a pedestrian across, even though there’s no crosswalk. We cross the double yellow line to leave cyclists enough room on the shoulder.
Improving our transportation system to take advantage of the best of machines and humans alike will require melding ambiguity and rigid rules. It will require creating rules that are, in certain ways, even more complex than today’s. But in other ways the system will need to be simpler. It will not only have to allow automated drivers to function well: It must be easily and clearly understood by the humans at its center.