A new article in the Guardian by Christian Wolmar, author of Driverless Cars: On a Road to Nowhere, summarizes how the grand promises for self-driving vehicles have fallen well short as lawsuits mount, cities impose restrictions, and the UK gets tough on the overselling of driver-assistance features. Wolmar depicts the autonomous vehicle industry (save ever-stubborn Tesla) as increasingly in reverse gear, with one-time wannabe leader Uber having exited the business and GM (via its Cruise operation) floundering.

Wolmar provides a fine compact account, but one is still left wondering: how did such a not-ready-for-prime-time technology get so far? Much of the impetus came from the Silicon Valley attitude of “Move fast and break things,” as demonstrated particularly by the deliberately mislabeled ride-sharing companies Uber and Lyft. They rolled over local cab regulations and met little resistance, largely because taxis are local businesses with not a lot of capital and no meaningful constituency. Hubert Horan has chronicled in exhaustive detail that the gig taxis never made any economic sense, since they had inherently higher costs than traditional cabs.

But self-driving cars, as in self-driving taxis, were a big part of the Uber/Lyft hype. Just imagine how great these companies’ economics would be if they could be rid of the expense of drivers! Of course, that argument ignored that those seen-as-disposable drivers provided the car, its maintenance, its insurance, and its gas, and typically did not understand their own economics (as in, they saw their revenues minus gas as their profits), all the better for the ride-hail companies to exploit them. Uber and Lyft with driverless cars would mean Uber and Lyft having to invest in and own fleets. How could that big capital outlay and the additional overhead possibly improve their economics?

We were skeptical of the idea that truly autonomous self-driving cars were achievable, and our concerns were confirmed by the fact that all these supposedly self-driving vehicles required human oversight, either a safety driver in the car or a remote monitor. We thought that had the potential to increase hazards, since a non-driving monitor would easily space out, then be forced to snap back to attention when the car sent an alert, and would be in cognitive catch-up mode as to what was happening.

To put it another way, the only way self-driving cars appeared able to live up to their promise would be if all cars were self-driving and ceded control to a central network, so the network would control the behavior of all vehicles, greatly limiting hazards and random events. But the “Level 5 autonomous vehicle” was based on the premise that somehow individual cars would become so smart and so good at data-crunching that they could navigate successfully even with many vehicles all operating independently, and many with pesky drivers. For instance, from Car Magazine:

The difference between Level 4 and 5 is simple: the last step towards full automation doesn’t require the car to be in the so-called ‘operational design domain’. Rather than working in a carefully managed (usually urban) environment with lots of dedicated lane markings or infrastructure, it’ll be able to self-drive anywhere. How? Because the frequency and volume of data, the rapid development of artificial intelligence (AI) and the sophistication of the computers crunching it, will mean the cars are sentient. It’s a brave new world – and one that Google’s Waymo car is gunning for, leapfrogging traditional manufacturers’ efforts. The disruption will be huge: analysts IHS forecast 21 million autonomous vehicles globally by 2035.

Wolmar describes how this Brave New World has stalled out. The big reason is that the world is too complicated. Or, to put it in Taleb-like terms, there are way too many tail events to get them all into training sets for the AI in cars to learn from. The other issue, which Wolmar does not make explicit, is that the public does not appear willing to accept the sort of slipshod standards typical of buggy consumer software. The airline industry, which is very heavily regulated, has an impeccable safety record, and citizens appear to expect something closer to that…particularly citizens who don’t own or have investments in self-driving cars and never consented to their risks. From Wolmar:

Developing driverless cars has been AI’s greatest test. Today we can say it has failed miserably…. Moreover, the recent withdrawal from the market of a leading provider of robotaxis in the US, coupled with the introduction of strict legislation in the UK, suggests that the developers’ hopes of monetising the concept are even more remote than before. The very future of the idea hangs in the balance….

Right from the start, the hype far outpaced the technological advances. In 2010, at the Shanghai Expo, General Motors had produced a video showing a driverless car taking a pregnant woman to hospital at breakneck speed and, as the commentary assured the viewers, safely….

First to go was Uber after an accident in which one of its self-driving cars killed Elaine Herzberg in Phoenix, Arizona. The car was in autonomous mode, and its “operator” was accused of watching a TV show, meaning they did not notice when the car hit Herzberg, who had confused its computers by stepping on to the highway pushing a bike carrying bags on its handlebars. Fatally, the computer could not interpret this confusing array of objects….

Now Cruise, the company bought by General Motors to spearhead its development of autonomous vehicles, is retreating almost as rapidly…In October, a woman crossing a road in San Francisco was hit by a human-driven car and knocked into the path of a Cruise robotaxi. Instead of stopping, the robotaxi drove over the pedestrian because it had been programmed to pull over to the right when confronted with an unknown situation. She survived but will clearly be in line for massive compensation…

Cruise… soon withdrew its robotaxis in all US cities and its CEO quit. It was revealed that vehicles were not even driverless, since the cars had been remotely controlled with interventions by operators about every four or five miles. There are now mass redundancies and the future of the development is uncertain.

And the UK has Tesla in its crosshairs:

In the US, where there have been numerous accidents with Teslas in “full self-driving” mode, the manufacturer is facing several lawsuits.

In the UK, Tesla will fall foul of the legislation introduced into parliament last month, which prevents companies from misleading the public about the capability of their vehicles. Tesla’s troubles have been compounded by the revelations from ex-employee Lukasz Krupski who claims the self-drive capabilities of Teslas pose a risk to the public. Manufacturers will be forced to specify precisely which functions of the car – steering, brakes, acceleration – have been automated. Tesla will have to change its marketing approach in order to comply. So, while the bill has been promoted as enabling the more rapid introduction of driverless cars, meeting its restrictive terms may prove to be an insuperable obstacle for their developers.

Ironically, this industry looks to have created its own woes by going off road early on. The initial impetus was a DARPA Grand Challenge in the early 2000s to build a vehicle that could drive long distances across the desert autonomously. Profit-minded visionaries quickly expanded the use case to driving in built environments, with their much greater complexity and risks, particularly to other people.

Perhaps these promoters should remember the lesson of the microwave. Appliance companies spent decades promoting the microwave as a replacement for the general-purpose oven, even though it is lousy at many cooking tasks, like baking and browning. It was only when designers realized that its proper use was as a limited-purpose rapid heating device that sales took off.

This entry was posted in Free markets and their discontents, Investment outlook, Risk and risk management, Technology and innovation, Uber on by Yves Smith.