It’s a good moral question.
It’s similar to the classic trolley problem, where test subjects are offered the scenario of a runaway train. If it continues, it will kill their child. They have the choice of pulling a lever to move a set of points, diverting the train into the path of four track workers, who would certainly be killed.
It’s much more of an ethical and moral question than one about autonomous cars.
Having said that, the auto industry is having deep doubts about so-called level-3 autonomous control.
There are six stages on the path to autonomy. Most people talk about five steps and call them levels 1 to 5.
See BMW’s explanation for example.
https://www.bmw.com/en/automotive-life/autonomous-driving.html
Broadly, level 1 offers systems such as lane-warning advisories, but the driver always chooses whether to act on those advisories.
Level 2 is the highest we have available today – the Tesla Model S, for example. The car can steer for itself and avoid other vehicles, but will relatively often require intervention from the driver.
Level 3 is the most problematic. The driver can hand over control to the vehicle systems for extended periods of time, but the vehicle is likely to request intervention in situations that it cannot understand.
Level 4 will still have driver-operated controls, and may occasionally require the driver to take control.
Level 5 will not even have driver-operated controls. It is fully automatic.
The sixth level is fully manual – such as cars from the 1950s, and even some from the 2000s. You might call it level 0.
For reference, there are no cars above level 2 in production today.
A number of questions relate to level-2 and level-3 driving, many arising because the technology is running ahead of legislation. Some interesting ones:
1. Who is responsible when the car causes a collision in automatic mode: the manufacturer, or the driver?
2. How long does driver training take? Is the training programme longer or shorter than for a fully manual vehicle?
3. In a world of GDPR, is the vehicle permitted to send data about driving conditions to the vehicle maker, or national road authorities in order to enhance safety?
4. When designing a level-3 autonomous vehicle, do we prefer to create false positives (a bit like crying wolf, risking that drivers ignore a warning to take control), or false negatives, which will probably lead to a collision with no warning?
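The trade-off in question 4 can be sketched numerically. In the hypothetical sketch below, the car scores each situation with a hazard-confidence value and warns the driver when the score crosses a threshold; all scores, events and threshold values are invented for illustration, and real systems are far more sophisticated:

```python
# Hypothetical (score, truly_hazardous) pairs, invented for illustration.
# score: the system's confidence that the situation is a genuine hazard.
events = [
    (0.95, True), (0.80, True), (0.55, True), (0.30, True),
    (0.70, False), (0.40, False), (0.20, False), (0.10, False),
]

def warning_errors(threshold):
    """Count nuisance warnings (false positives) and missed hazards
    (false negatives) for a given warning threshold."""
    false_pos = sum(1 for s, hazard in events if s >= threshold and not hazard)
    false_neg = sum(1 for s, hazard in events if s < threshold and hazard)
    return false_pos, false_neg

for t in (0.25, 0.50, 0.75):
    fp, fn = warning_errors(t)
    print(f"threshold={t:.2f}: {fp} nuisance warnings, {fn} missed hazards")
```

Lowering the threshold eliminates missed hazards at the cost of crying wolf; raising it silences the nuisance warnings but lets real hazards through unannounced. There is no setting that removes both kinds of error at once.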
The reality is that when drivers become habituated to autonomous driving for extended periods (such as on a quiet motorway), they pay much less attention, and take much longer to respond when the vehicle insists they take control under dangerous circumstances.
Competent drivers, who can respond to an emergency in less than a second when alert and driving, take much longer – many seconds – to respond in the same way, after an extended period of allowing the vehicle to have control.
That increased delay leads to more collisions, as the vehicle systems tend to identify emergency situations later than an alert driver, giving less time to respond.
That’s not to say computers will never take over the task of driving. That happens, for example, on the Docklands Light Railway, and the POD system at Heathrow terminal 5.
In those systems, the routes are highly controlled and the extraneous traffic is very severely limited.
As a result, the OECD has suggested that level 3 vehicles should not be developed, never mind be allowed on the roads with other vehicles.
One of the challenges is that many of the systems are being developed by engineers – and there are very significant engineering challenges. However, as TTT loves to remind us, there is a much greater psychological challenge in building confidence in the capabilities and safety of the technologies.