Cars that drive themselves aren’t science fiction any more, and companies such as Google, Uber and many vehicle manufacturers are actively testing the technology on public roads.
Just how safe are autonomous cars, though? In the US, a woman was recently hit and killed by a self-driving car that was part of taxi app Uber's fleet. The car was driving itself – although a human 'monitor' was also inside – and it's thought that this is the first time an autonomous vehicle has been involved in a fatal collision.
There have been other non-fatal accidents involving autonomous cars, though. In March 2017, Uber suspended the testing of self-driving cars following an accident in which a Volvo XC90 crashed into another vehicle at a junction. At the start of 2016, one of Google’s fleet of Lexus SUVs collided with a bus in California. It wasn’t the first time one of Google’s cars had been involved in an accident, but it was the first time the company admitted that the car was partially responsible for it.
And what about cars that aren’t yet fully autonomous but are part of the way there – how safe are they? The latest Audi A8, for example, will be able to accelerate, brake and steer itself at speeds of up to 37mph from later this year; while Audi says it's up to governments to decide if the technology is legal to use, the technology itself is actually ready.
The difficult road to autonomous cars
On 7 May 2016, Joshua Brown’s Tesla Model S hit a trailer being towed across the road. Neither the car’s sensors nor the driver spotted the trailer, so the brakes weren’t applied. The car’s windscreen hit the bottom of the trailer and Brown was killed.
Tesla's Autopilot system was engaged and, according to some reports, Brown was watching a film on a portable DVD player at the time of the crash.
As well as expressing its condolences, Tesla was quick to defend Autopilot’s safety record. In a blog post a few weeks after the crash, the company stated: “This is the first known fatality in just over 130 million miles where Autopilot was activated. Among all vehicles in the US, there is a fatality every 94 million miles.”
However, Tesla did acknowledge that the system had failed to detect the trailer, blaming a combination of the trailer’s high ride height and what it described as “the extremely rare circumstances” that allowed the car to pass under the middle of the trailer rather than hitting the front or rear of it.
The accident was widely reported as the first death in a self-driving car, but the truth is a little more nuanced.
Tesla's Autopilot system doesn’t make the Model S truly autonomous. It’s what the US National Highway Traffic Safety Administration (NHTSA) refers to as a Level 2 system delivering “combined function automation” (the NHTSA has identified five levels of automation, ranging from Level 0, a car with no automation, to Level 4, a car that’s fully automated at all times).
What the Level 2 description means in practice is that while the driver may not actively be steering the car or using the accelerator or brakes, they are still responsible for the vehicle and should continue to monitor the road. Crucially, the NHTSA’s definition of Level 2 driving aids includes the warning that “the system can relinquish control with no advance warning and the driver must be ready to control the vehicle safely”.
A quick search on YouTube shows video after video of Tesla owners and reviewers with their hands off the steering wheel. However, that isn’t how Autopilot is designed to be used. In its post-crash blog post, Tesla pointed out the warning that all drivers see when the system is turned on: "When drivers activate Autopilot, the acknowledgment box explains, among other things, that Autopilot 'is an assist feature that requires you to keep your hands on the steering wheel at all times' and that 'you need to maintain control and responsibility for your vehicle' while using it. Additionally, every time that Autopilot is engaged, the car reminds the driver to 'Always keep your hands on the wheel. Be prepared to take over at any time.'"
That doesn’t mean there are no lessons for Tesla to learn from the crash and other non-fatal incidents involving Autopilot that have come to light since. These occurrences underline the importance of drivers using Autopilot and similar systems properly and understanding their limitations. They also show that salespeople, marketers and car reviewers need to be responsible in the way these systems are presented to the public.
What about fully autonomous cars?
If the biggest concern with today’s driver assistance systems is their potential for misuse, what about the fully autonomous systems that are now on the cusp of appearing?
Google’s US prototypes have been racking up the miles for years, but the UK is looking to catch up. Three autonomous car projects are now under way, including the Gateway (Greenwich Automated Transport Environment) project. Gateway involves a fleet of seven electrically powered self-driving cars taking to the roads of Greenwich, London. The project is led by the UK’s Transport Research Laboratory (TRL).
“The first thing we did was look at safety,” says Richard Cuerden, the TRL’s chief scientist of engineering and technology. “We’ll be following the UK Government’s code of practice for testing autonomous vehicles. That means there’s always a human being overseeing the system. The routes the vehicles are taking are fully mapped and the vehicle’s 360-degree radar and cameras monitor all around the car.”
So if these cars are ready for testing now, how soon will such technology reach the showroom? “We’re a long way from having a vehicle that I could get in at my home in Hampshire that would drive me all the way to Scotland on all sorts of roads – probably 10 years plus,” says Cuerden. “But a vehicle that could drive from Manchester to Birmingham on the M6 is just a few years away.”
This is the next step on the NHTSA’s autonomy scale. Level 3 cars won’t need the driver to constantly oversee the automated systems, and they will be able to travel long distances autonomously in the right circumstances and on specific road types. So the driver might be in control until the car joins the motorway, at which point the car will take over; it will then return control to the driver when it leaves the motorway and re-enters a more complex and demanding traffic environment.
Semi-autonomous: neither one thing nor the other?
Sophisticated semi-autonomous cars will be less open to driver misuse than systems such as Autopilot, but they bring with them their own issues that car companies and researchers need to grapple with before they can go on sale. In particular, the transition from being a passive passenger to an active driver is problematic. How long does it take for a driver to go from reading a book or checking emails to being fully aware of a car’s surroundings and any hazards that might be approaching?
“We’re all different in the speed at which we process and respond to hazards,” says Cuerden. “Waking someone up from a non-driving mode to being back in the driving seat will be the same. Exactly how that should be done is still being researched and debated. We also need to fully understand what will happen if there’s an emergency and the driver needs to get back in the loop immediately. This is an area that’s being researched intensely.”
Volvo is one manufacturer investigating the best way to handle this transition. “There must be no ‘mode confusion’, where the driver thinks the car is in control and the car thinks the driver is in control,” says Robert Broström, Volvo’s senior technical leader of user experience. “We will evaluate a solution where the driver pushes two paddles on the steering wheel to activate and deactivate the IntelliSafe Autopilot [Volvo’s self-driving technology]. The car will clearly inform the driver when it is time for him/her to take over the responsibility.”
The trouble with partial autonomy isn’t restricted to those moments when control passes from the driver to the car and vice versa. How will a mixed fleet of self-driven, partially autonomous cars – and eventually fully autonomous ones – interact?
It’s an area researchers at the University of Michigan’s Transportation Research Institute have examined, with concerning results. They found that experienced drivers make use of eye contact and other subtle signs to judge the intentions of road users. Without these additional cues, they might misunderstand what an autonomous car is about to do. “During the transition period when conventional and self-driving vehicles would share the road, safety might actually worsen, at least for conventional vehicles,” the researchers concluded.
It’s when fully autonomous technology is ready – and the legislation is in place to allow its use – that Cuerden sees the fewest headaches and the greatest benefits. “We will see safety improvements. We’ve done some work where we’ve looked at real collisions between cars and pedestrians and reconstructed them, replaying the collisions as if the vehicles were autonomous. We estimate that pedestrian fatalities could be reduced by 20%, and pedestrians and cyclists are the groups that autonomous cars will struggle with the most,” he says.
But what if saving a pedestrian or cyclist means sacrificing the people inside the car? If a fatality is inevitable, should an autonomous car be programmed to save its owner or a vulnerable road user?
It’s a question the Massachusetts Institute of Technology (MIT) put to members of the public. It found the majority surveyed thought autonomous cars should sacrifice their occupants, although the respondents also said they would prefer not to ride in such vehicles.
So many questions remain. However, Cuerden is convinced that, in the long term, autonomous cars will herald a road-safety revolution. “Human error contributes to more than 90% of collisions. Autonomous cars won’t make all of these go away, but we will see a big change. The reduction in harm will be on the same scale as when seatbelts became mandatory, or even more.”