By: Eric Chaffin
In August 2017, Argo AI started testing its self-driving vehicles in the city of Pittsburgh. On January 9, 2018, one of those vehicles was involved in a traffic accident with a box truck. Two people were injured and sent to the hospital. What does this accident say about the safety of new autonomous technologies?
Self-Driving Car Crash Occurs in Pittsburgh
Argo AI, a start-up backed by Ford, is in the business of developing self-driving cars, and deployed several in Pittsburgh last summer. This was the first accident involving one of its vehicles, and authorities blamed it on human error.
The box truck and the Argo vehicle collided at the intersection of 16th and Progress streets, according to the Pittsburgh Post-Gazette. The box truck reportedly ran a red light, T-boning the Argo AI vehicle. The two people who were injured were in the self-driving vehicle. Both were reported in stable condition and were later released from the hospital. At the time of this writing, it’s unclear whether the vehicle was in self-driving mode at the time of the crash.
Uber Self-Driving Cars Also Involved in Crashes
Uber was the first company to test self-driving vehicles in Pittsburgh, putting several on the road in the city in September 2016. In March 2017, the company suspended the program after a self-driving vehicle crashed in Arizona, but it resumed testing three days later after determining that the human driver was at fault.
Again in September 2017, an Uber self-driving car was involved in a crash in Pittsburgh. This one occurred at the intersection of Sidney and Hot Metal streets, when a black Nissan Sentra collided with an Uber SUV. No one was injured. Uber again grounded its fleet for a few hours, but after determining that neither the software nor its driver was at fault, the company resumed testing.
Agencies Rushing Autonomous Vehicles to Market
In September 2017, federal highway safety officials released updated federal guidelines for automated driving systems, expressing support for further development of self-driving vehicles, and encouraging “best practices” for safety.
In November 2017, the nonprofit RAND Corporation released a study suggesting that earlier adoption of these technologies—even before they’re perfect—will save more lives than waiting until all the “kinks” are worked out.
Not everyone is convinced that faster is better, however. Author Jeffrey Mervis asks in his December 2017 article in Science magazine, “Are we going too fast on driverless cars?”
“While developers amass data on the sensors and algorithms that allow cars to drive themselves,” the author writes, “the research on the social, economic, and environmental effects of AVs is sparse.” He suggests that driverless cars could increase congestion, energy consumption, and pollution, and could exacerbate urban sprawl. Software glitches could lead to “repeated recalls, triggering massive travel disruptions.”
And while government agencies assure us that self-driving cars will make the roads much safer than they are now, these vehicles are years away from perfection. In the meantime, what happens when the technology itself is to blame for the loss of human lives?
“[C]onventional wisdom holds that the public will be much less accepting of crashes caused by software glitches or malfunctioning hardware rather than human error,” Mervis writes.