The self-driving car may seem like a thing of the future, but automated tech is more fact than fiction these days. Okay, so the hoverboard and flying car are still a fair way off, but all modern cars in today’s showrooms have some level of automation.
There are brake assist systems that push the pedal much harder than your foot if you brake suddenly in an emergency; stability control systems that brake individual wheels or cut engine power when grip is lost; and radar cruise control and crash avoidance systems that can sense an impending collision and automatically brake or even try to steer away from the impact. There are also a few (quite affordable) cars that can perform a perfect parallel park all by themselves.
It’s only a matter of time before higher levels of automation filter through to production. But do we actually need the driverless car?
Ask anyone without the means to drive a car – for example, the disabled and the elderly – and the answer would be a resounding yes. If our vehicles were fully automated, these people could enjoy the freedom that a vehicle brings.
Some may argue that taxis and buses already fill these needs. But a car allows you to get away on a spur-of-the-moment road trip without needing to pay a stranger to chauffeur you.
For those perfectly capable of operating a car, freedom comes in the form of time. As our transport networks struggle to keep up with the demand of a burgeoning population, the trip to and from work can take up a substantial portion of each day. If we were driven by our vehicles, we could get more work done, get more sleep, or even have time to read that dog-eared novel that only seems to come out on holidays.
Autonomous cars could also all but eliminate the three biggest killers on our roads today: drink- and drug-driving, fatigue, and speed. A car isn’t going to get drunk or take drugs, it doesn’t get tired, and it could be programmed not to speed or break the road rules in any way.
Universities, manufacturers and businesses such as Google have been testing autonomous vehicles for several years now. Google is aiming to release its first self-driving production car as soon as 2017 – the same year Audi, Toyota and Volvo are scheduled to release semi-autonomous sedans.
These autonomous cars, currently in the testing process, rely on sensors combined with GPS technology to constantly scan their position and calculate the best way forward. Google claims the sensors in its Lexus SUVs and two-seat prototype cars can detect objects two football fields away in every direction – not just big objects like people and other cars, but ‘fluttering plastic shopping bags and rogue birds’. The GPS technology, combined with predictive software that forecasts the trajectory of other cars (and those pesky birds), allegedly enables the vehicle to navigate the streets safely.
The Google cars have ticked over 1.6 million kilometres of autonomous testing, mainly in the US and always with a human behind the wheel as a safety net, and it hasn’t been flawless.
One of the test vehicles famously crashed near Google HQ in 2011. And, as recently as last month, there was a minor bingle at an intersection. Of the 12 crashes Google has put its hand up for, it claims none were the result of autonomous driving, and all but one were caused by distracted drivers rear-ending the test cars.
Manufacturers including Audi and Volvo are also claiming a zero at-fault crash rate through their testing processes. And so it should be – the only way this kind of tech could work on a large scale, and be accepted by the public, is through a flawless network, with each car perfectly synced with other vehicles and its surroundings.
If this happens, and we give up control of our cars, what would happen in the event of a hardware or software malfunction? If the network went down, if someone hacked the system, or there was any kind of glitch, then we might really see some carnage.
The Google prototype car doesn’t even have pedals or a wheel, as it is designed to transport its passengers around in complete autonomous comfort; this means lounge seating and plenty of legroom. It also means the occupants can’t intervene if there is an impending accident, or move the car out of the way if one did occur.
The post-accident blame game would then play out with interesting results. Who would be liable: the manufacturer, software developer, or the hapless passenger?
The network on which autonomous cars would run, and the infrastructure needed to sustain it, would not only have to be built out over time in every corner of the country – it would have to be absolutely infallible. People would have to trust it implicitly.
This is the main reason why the driverless car is not around the corner; we humans simply aren’t ready – physically or emotionally – to hand over control of the wheel.