An overhaul to Tesla’s Autopilot webpage might represent the clearest acknowledgment yet that the company has failed to deliver on Elon Musk’s ambitious vision for a self-driving future.
“You will be able to summon your Tesla from pretty much anywhere,” Musk wrote in July 2016. “Once it picks you up, you will be able to sleep, read or do anything else enroute [sic] to your destination.” Indeed, he predicted, Tesla customers with full self-driving capabilities would be able to have their cars join a ride-hailing network to “generate income for you while you’re at work or on vacation.”
In January 2016, Musk predicted that Tesla cars would be able to drive autonomously coast to coast “in ~2 years.”
Needless to say, this hasn’t happened. And after more than two years of peddling unrealistic visions of Autopilot’s future, Tesla’s Autopilot page has finally been updated to reflect that reality.
The page’s headline has changed from “Full Self-Driving Hardware on All Cars” to “Future of Driving.” A sentence about Tesla’s ride-sharing network has been deleted. The “Full Self-Driving” section now includes a disclaimer that “future use of these features without supervision is dependent on achieving reliability far in excess of human drivers as demonstrated by billions of miles of experience.”
In other words, despite Musk’s bluster over the years, Autopilot is still just a driver-assistance system. And it will continue to be just a driver-assistance system for some time to come.
Musk still wants to gradually improve the safety of this driver-assistance system. Eventually, the technology could become so good that it will no longer require human oversight.
But there’s reason to doubt that this strategy is going to work. More importantly, there’s reason to worry that it could get people killed.
Tesla is clinging to an old conventional wisdom
In 2014, the same year Tesla started shipping the first generation of Autopilot hardware, the Society of Automotive Engineers published its taxonomy of driving automation, which ranks systems from level 0 to level 5. That framework envisioned driver-assistance systems (“level 2” in SAE jargon) gradually morphing into fully autonomous systems that could operate without human supervision (levels 4 and 5).
But the last five years have seen a dramatic shift in industry thinking. Most companies now see driver assistance and full self-driving as distinct markets.
No company has done more to change industry thinking here than Google, whose self-driving project was spun off as Waymo in 2016. Around 2012, Google engineers developed a highway driving system and let some rank-and-file Googlers test it out. Drivers were warned that the system was not yet fully autonomous, and they were instructed to keep their eyes on the road at all times.
But the self-driving team found that users started to trust the system way too quickly. In-car cameras showed users “napping, putting on makeup and fiddling with their phones.” And that created a big safety risk.
“It’s hard to take over, because they have lost contextual awareness,” Waymo CEO John Krafcik said in 2017.
So Google scrapped plans for a highway driver assistance product and decided to pursue a different kind of gradualism: a taxi service that would initially be limited to the Phoenix metropolitan area. Phoenix has wide, well-marked streets, and snow and ice are rare. So bringing a self-driving service to Phoenix should be significantly easier than developing a car with self-driving capabilities that work in every part of the country and all weather conditions.
This approach has some other advantages, too. Self-driving cars benefit from high-resolution maps. Gathering map data in a single metro area is easier than trying to map the whole world all at once.
Self-driving cars also benefit from lidar sensors, and the best ones cost thousands—if not tens of thousands—of dollars each. That’s too expensive for an upgrade to a customer-owned vehicle. But the economics are more viable for a driverless taxi service, since the self-driving system replaces an expensive human taxi driver.
Over the last three years, most other companies working on self-driving technology have followed Waymo’s lead. GM bought a startup called Cruise in 2016 and put it to work developing an autonomous taxi service in San Francisco. Ford made a similar bet on Argo AI in 2017—the company is now developing autonomous taxi services in Miami and Washington, DC.
Volkswagen and Hyundai have deals with Aurora—a startup co-founded by Chris Urmson, the former leader of the Google self-driving project—to develop fully autonomous taxi services. Technology companies like Uber and Zoox are planning to introduce autonomous taxi services.
Tesla’s business model locks it into the old approach
Tesla, meanwhile, has stubbornly pushed forward with its original strategy. For more than two years, Tesla charged customers $3,000 or more for a “full self-driving” package. But progress has been slow. And that has put Tesla in a bind. Abandoning the old strategy would likely require refunding customers who paid for the Full Self-Driving package—which would be both embarrassing and expensive.
Instead, Tesla’s solution has been to move the “full self-driving” goal posts.
“We already have full self-driving capability on highways,” Musk said during a January earnings call. “So from highway on-ramp to highway exit, including passing cars and going from one highway interchange to another, full self-driving capability is there.”
Obviously, this statement comes with a big asterisk: the driver still has to supervise the car to make sure it doesn’t crash.
Last week, Tesla announced a reshuffle of the Autopilot price structure that reflects this new, more-generous definition of full self-driving. Previously, driver-assistance features were sold as part of Tesla’s “Enhanced Autopilot” tier that cost $5,000. Customers could pay an additional $3,000 for the “Full Self-Driving” package.
But people who paid for this package didn’t get any extra functionality. They were waiting for, well, “full self-driving”—a car capable of driving itself without human supervision.
The new pricing structure defines full self-driving differently. The ability to navigate freeway interchanges, for example, was shifted from “Enhanced Autopilot” in the old pricing structure to “Full Self-Driving” in the new one. Later this year, Teslas with the “Full Self-Driving” package will be able to “recognize and respond to traffic lights and stop signs” and perform “automatic driving on city streets.”
Hence, Tesla now seems to define “full self-driving” as a system that can handle most road conditions under the supervision of a human driver. Tesla is still aiming to improve the system enough that—eventually—it can operate without human supervision. But the new pricing structure makes things less awkward in the meantime, since Tesla can now argue that customers have already received “full self-driving” features like the ability to stop at stop signs.
Tesla’s strategy could get people killed
As a matter of business strategy, Tesla’s shift makes a certain amount of sense. The problem is that this strategy could wind up getting Tesla’s customers killed.
Think back to the story of Google’s early beta testers putting on makeup or fiddling with their phones when they should have been supervising Google’s self-driving car prototypes. It’s really hard for a human being to pay attention to the road when riding in a car that is mostly driving itself. The better self-driving technology is, the easier it is for a driver’s mind to wander and the less likely they are to be ready when intervention is needed.
This dynamic had tragic consequences a year ago when an Uber car struck and killed a pedestrian in Tempe, Arizona. Dashcam video shows the safety driver looking down at her lap for several seconds before the crash. Records from Hulu show that she was streaming a television show to her phone at the time.
Leading self-driving car companies take a number of precautions to avoid a repeat of this tragedy. Safety drivers receive extensive training before being allowed behind the wheel. Some companies limit their drivers’ hours. Many companies put two people in each car—one to drive and the other to deal with data entry while making sure the driver stays alert.
Tesla’s plan is to essentially run a massive driverless-car testing project using its customers as unpaid safety drivers. Drivers get no real training on the dangers of inattentive Autopilot use. Tesla doesn’t limit the number of hours people can drive the cars, and the company obviously doesn’t hire someone to sit in the passenger seat.
Tesla does take a few worthwhile precautions. A Tesla car detects if the driver’s hands aren’t on the steering wheel, and it issues a series of escalating warnings—eventually coming to a stop if the driver ignores them. On-screen messages warn drivers about the dangers of inattentive driving.
Still, there’s reason to doubt that these measures are sufficient to keep drivers engaged with the driving task. And this problem will only get worse as Autopilot begins to navigate freeway interchanges, take turns, and stop for stop lights. If your car safely drives you home from work for 100 days in a row, it’s natural to stop paying close attention. If the car makes a serious mistake during the 101st trip, you might not be paying enough attention to intervene and prevent a crash.
It only takes a few seconds of inattention to miss a deadly mistake. Tesla owner Walter Huang died in March 2018 after his Model X steered into a concrete lane divider at 70 miles per hour. Poorly striped lanes caused the vehicle to drift out of its lane and into the “gore area”—a triangular patch of paved road separating the highway’s travel lanes from an exit lane. If Huang had no reason to expect the Model X to make that particular mistake, it would have been easy for him to assume that this stretch of road didn’t require his close attention.
Musk argues that this testing period will be fairly brief—because soon the technology will become much safer than a human driver.
“When will we think it’s safe for full self-driving? Probably toward the end of this year,” Musk said during January’s earnings call.
But that seems like another of Musk’s overly optimistic predictions. Tesla’s decision to forgo lidar will make it particularly difficult to achieve.
Lidar is no panacea, but one thing it’s quite good for is making sure that a car doesn’t steer directly into large solid objects like concrete lane dividers or other vehicles. As recently as last October, Autopilot was still crashing into stopped cars—something Waymo cars have known how to avoid for years.
Yet even with lidar and a several-year head start over Tesla, Waymo has struggled to achieve fully driverless operation in a single metropolitan area. Tesla is working to achieve fully autonomous operation in a wide range of traffic and weather conditions on multiple continents. It’s very hard to believe that this will happen in 2019.
Correction: An earlier version of this article stated that Walter Huang had his hands on the steering wheel at the time of his fatal crash. In fact, Huang’s hands were last detected on the wheel six seconds before the crash; we don’t know whether his hands were on the wheel after that.