Bloomberg News
Waymo Looks to Calm Doubts About Driverless Cars With Data Release
Autonomous vehicle developers have so far operated without shared industry standards for safety and with minimal oversight from state and federal governments. Waymo, saying that it is seeking to revive a conversation around these issues, on Oct. 30 released a trove of safety data for its fleet of robotaxis in Arizona. The voluntary disclosure comes less than a month after Waymo announced that it would begin offering fully driverless service to customers in Arizona and days before an election that could usher in a much tougher regulatory regime.
“It’s really just going to be a question for us of how strong some of the opposition will be to this technology, versus how enabling the legislative or regulatory approaches might be,” said George Ivanov, Waymo’s head of international policy and government affairs, during a conference call with reporters.
The safety report covers 6.1 million miles of automated driving in 2019 and 2020, including 65,000 miles without a safety driver at the wheel. The total mileage, Waymo said, represents more than 500 years behind the wheel for the average U.S. driver. It’s the first time that Alphabet Inc.’s self-driving car unit has made such information public. Waymo also released a 30-page document describing its methodology for assessing the safety of its vehicles.
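As a rough check on that comparison (our arithmetic, not Waymo's, assuming the commonly cited figure of roughly 12,000 miles driven per year by the average American motorist):

$$ \frac{6{,}100{,}000 \text{ miles}}{\sim 12{,}000 \text{ miles per year}} \approx 508 \text{ years} $$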
When Waymo announced its fully driverless service in Arizona earlier this month, Tesla CEO Elon Musk paid the rival a backhanded compliment on Twitter, calling Waymo a “highly specialized solution.” Tesla, which some consider Waymo’s primary rival in automated driving technology, was developing a “general solution” that was “capable of zero intervention drives,” he wrote. Waymo, in an uncharacteristic move, replied to Musk’s tweet with a photo of the steering wheels in its robotaxis, which are labeled with the warning: “Please keep your hands off the wheel. The Waymo Driver is in control at all times.”
Yep, we specialize in zero intervention driving. Check out our steering wheel labels. pic.twitter.com/WpYopuS3SW — Waymo (@Waymo) October 8, 2020
“We’re two entirely different businesses, building two entirely different technologies,” Nick Webb, Waymo’s director of systems engineering, said of Tesla on the conference call. While Tesla, he noted, needs drivers at the wheel paying attention, Waymo is building vehicles that allow passengers to ride in the back seat while the front is empty.
Waymo reported 18 “contact events” involving its fleet over the period covered by the safety report, only one of which involved a fully driverless vehicle. In an additional 29 instances, Waymo concluded that a collision would have occurred had a safety driver not intervened, using simulators to play out these “what if” scenarios. All 47 incidents involved errors or rule violations by other road users, according to Matthew Schwall, Waymo’s head of field safety. None of the incidents was severe enough to make critical injuries likely, the company said, though airbags deployed in either the Waymo minivan or another vehicle on three occasions.
Although the 29 simulated incidents all involved aberrant behavior by other road users, including missed stop signs and failures to yield the right of way, Waymo’s safety drivers proved better than its software at anticipating the danger and preventing a collision in those moments. Waymo, Schwall said, studies such events carefully and uses them to refine its driving software. The company cautioned against comparing its record with accident rates in the general population, both because its report includes minor contacts of the kind that rarely get reported in everyday driving and because all of its 6.1 million miles were logged on a select set of suburban Phoenix roads with speed limits of 45 miles per hour or less.
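Waymo hasn’t published the machinery behind those counterfactual replays, but the basic idea can be sketched. Below is a minimal illustration in Python; every name in it (Disengagement, replay_fn, the .collision flag) is hypothetical, invented here to show the shape of the technique rather than Waymo’s actual tooling. The replay leaves the automated driver in control instead of the human takeover and asks whether the simulated rollout ends in contact.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Disengagement:
    """A logged event where the safety driver took over (hypothetical schema)."""
    log_id: str
    takeover_time: float  # seconds into the log when the human intervened

def would_have_collided(event: Disengagement,
                        replay_fn: Callable[..., object]) -> bool:
    """Counterfactual check: replay the log with the automated driver
    left in control instead of the human takeover, and report whether
    the simulated rollout contains a contact event.

    `replay_fn(log_id, use_human_input=...)` stands in for a simulator;
    it is assumed here to return a rollout object with a `.collision` flag.
    """
    rollout = replay_fn(event.log_id, use_human_input=False)
    return rollout.collision

def triage(events: List[Disengagement],
           replay_fn: Callable[..., object]) -> Tuple[list, list]:
    """Split disengagements into takeovers that prevented a collision
    and takeovers the automated driver would have handled on its own."""
    prevented, benign = [], []
    for e in events:
        (prevented if would_have_collided(e, replay_fn) else benign).append(e)
    return prevented, benign
```

By Waymo’s account, 29 of the disengagements it logged would land in the first bucket, and all of them traced back to another road user’s mistake.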
In a poll commissioned earlier this year by the advocacy group Partners for Automated Vehicle Education, nearly half of respondents said they would never get into a self-driving taxi. In March 2018, a test vehicle operated by Uber’s self-driving unit struck and killed a pedestrian in Tempe, Ariz., leading the state to shut down the company’s testing program there. The incident set an implicit zero-fatality standard for AV developers in the state and raised thorny legal and ethical questions about liability when humans and robots share driving responsibilities. Earlier this year, the safety driver of the Uber vehicle was charged with negligent homicide.
Federal regulators have so far taken a hands-off approach to automated vehicle development. While California requires that autonomous vehicle developers report the total miles and the number of interventions by safety drivers for test cars in the state, there is no federal mandate for safety reporting. In its latest guidance, released in September 2017, the National Highway Traffic Safety Administration, under Secretary of Transportation Elaine Chao, put forward a “nonregulatory approach” that left the automotive industry to design its own best practices for testing and deployment.
In the absence of strong regulation, Waymo has taken a relatively cautious approach, limiting its ride-hailing vehicles to 100 square miles of suburban Phoenix and pulling safety drivers only after tens of millions of miles of testing. Tesla has been more aggressive, releasing increasingly ambitious versions of its advanced driver-assistance systems to the general public. Earlier this month, it began rolling out a beta version of what it calls “Full Self-Driving” to select customers. Videos of rattled Tesla drivers finding flaws in the system soon proliferated on social media. The software’s terms of service require drivers to remain prepared to take control of their cars and leave them liable for collisions.