Imagine you are driving down the road and glance over at the driver next to you, only to find there is no driver. As confusion and panic start to take hold, you notice the cameras and sensors on the car and think: cool, it’s a driverless vehicle! But do those feelings of confusion and panic disappear? Are you safe driving next to an autonomous car? Wade Trappe, IEEE Fellow and professor of electrical and computer engineering at Rutgers University, discusses his concerns about the push to bring fully automated vehicles to market quickly. Trappe believes researchers, industry, and governments need to do more research, testing, and planning before society widely adopts autonomous vehicles.
“My biggest concern is that while the technology is promising, for technologies like this where there are significant safety risks, one should proceed with caution. I am worried that people want to adopt this technology too early, before ‘all the kinks’ have been worked out. And there will be many kinks. Who is to say that, after a couple more years, all of the ‘corner cases’ have been mapped out?” says Trappe.
These concerns stem in part from problems automated vehicles have already experienced on public roads, including crashes and traffic violations. So far, automated vehicles have not measured up to the standards Trappe expects of the engineering profession. “When we think of safety requirements for certain engineering applications (e.g. the construction of a bridge), we want the reliability to go out several decimal places (i.e. 99.999% reliability),” says Trappe.
“I am a firm believer in conservative engineering—the designs need to have cautiousness in their decisions—after enough failures have happened, the engineers will go back to design-in fixes and more cautious decisions that say ‘when I encounter a situation where there are two possible choices to make, here’s how to choose the one with the least risk/danger,’” says Trappe.
One component of the issue is the gap between the technology’s current capabilities and consumers’ expectations. Trappe believes the industry should first focus on driver-assist technology rather than rushing into driver-replace vehicles, and that this difference needs to be emphasized to consumers. “We saw, with the Tesla incident in China, that drivers want to believe that they can use the ‘driver assist’ technology in a ‘fully automated (no driver needed)’ manner,” says Trappe.
Trappe thinks that fully automated vehicles can come to market safely, someday. But first, more preparation is needed, and engineers are crucial to that preparation, working in collaboration with governments and across the industries involved.
Vehicle automation demands ever more reliable in-vehicle signal processing, and this “signal processing is closely tied to more data and more trials. So, government needs to require more trials and higher requirements regarding quality. Meanwhile, industry needs to conduct more trials, collect more data, and conduct large-scale data analysis (where signal processing and machine learning comes into play),” says Trappe.
So when will fully automated vehicles be ready to come to the market?
“I think the near future (1-2 years) is not realistic as technology needs time to develop, for all the various corner cases to be mapped out,” says Trappe. “The technology will work eventually, but it needs more time… so my horizon would be more along the lines of 5-10 years. Perhaps there will be limited deployments sooner than that (e.g. deployments in cities that are naturally safer than other cities for such technologies to be tried out… for example, Manhattan has to be more complex than tiny-town USA…).”
Trappe believes the technology is still developing and that a slow integration into mainstream use is critical. This would allow engineers to build on early successes and failures, and to develop favorable public opinion and buy-in for the technology.
“I think deployment will go in stages, with deployments in ‘easier to succeed towns than other harder to succeed towns’. Industries like this can’t handle too many negative press stories, so they will need to deploy first in scenarios where they will succeed… this will allow them to collect more data that can ultimately allow them to deploy in slightly harder settings… which will then allow them to deploy in slightly harder… etc.,” says Trappe. “The minimum requirement to go to market would be success in a handful of increasingly difficult cities, and showing over long enough time that the failure rate is extremely low, and risks are comparably low.”
In addition to engineering solutions to the safety issues of automated vehicles, Trappe notes that there are other factors that need to be addressed.
“There are all sorts of legal/insurance questions to work out—such as who is liable under different conditions? Such a technology needs to have the basic legal framework started (at least) before it can really be deployed,” says Trappe.
“We need to move with caution, collect larger data sets, share them as a community so that we can all have better ground truths upon which to build algorithms for making automated vehicular decisions. (An agency like NIST could host such data for everyone to use),” notes Trappe. “And, we need users who are itching to use this technology to understand that it is a ‘driver-assist’ and should be used in that way long after companies say it is ‘driver-replace’. This means solving one of the hardest problems that has plagued technological innovation: getting users to read the user manual! In this case, though, it is really important that we solve this problem. We are, after all, dealing with human lives.”
Read more from Trappe in his IEEE Signal Processing Magazine article, “No Need for Speed: More Signal Processing Innovation is Required Before Adopting Automated Vehicles.”