September 14, 2020
Quoted In

(Inside Science) -- A driverless car isn't driven by a person but is controlled by a system of sensors and processors. In many countries, tests of autonomous driving have been underway for years. Germany wants to permit driverless cars across the country by 2022. As the technology develops, researchers continue to explore ways to improve the algorithms that make driving decisions and to make roadways safer. Bryan Reimer is quoted:

“Society hasn’t answered, how safe is safe enough?” he said. The premise in many academic papers is that driverless cars can be adopted once they can be trusted to drive more safely than humans do. But Reimer says that doesn’t go far enough. “We are not ready for robotic error to harm people,” he said. It’s important to define what is appropriately safe. Different countries are still wrestling with how to shape legal standards to fit a future driverless world.

Robotic error will differ from human error. Driverless cars aren't going to fall asleep or get distracted when a text message pings. But they will err in other ways, for example by mistaking a blowing piece of trash for a person. “Machine intelligence is really good at black-and-white decisions and getting better at others, while humans are adept at making decisions in gray areas,” said Reimer, who gave a TEDx talk called “There’s More to the Safety of Driverless Cars than AI.”

“We need to be thinking less algorithmically,” said Reimer. He points to the aviation industry as an example: Decades ago, there were plans to automate the pilot out of the cockpit, but the industry soon discovered that wasn’t the best plan. Instead, they aimed to couple human expertise with automation. “In airplanes, people work with automation and leverage it and take on new responsibilities,” Reimer explained. “That’s what has driven aviation safety to where we are today.”

So how safe is safe enough? Reimer says that it’s about creating a culture of safety. To start, anything that is shown to be substantively safer, even a 5%-10% improvement, would be a starting point, but it won't be acceptable over the long haul. Instead of a fixed safety standard, the goal should be a continual process of improvement -- something akin to the way the FDA certifies new drug therapies or medical devices. “Anything that is safe enough today is not safe enough tomorrow,” he said.
