Are we on a road to nowhere with self-driving and tech-reliant vehicles?
Back in early June, streams of data destined for Lexus satellite navigation systems as part of a routine update began to accumulate in cyberspace. What drivers and their vehicles on the receiving end got, however, was more of a cyber black hole. Implementing the update left the satellite navigation and entertainment systems offline and stuck in a boot loop.
Among the vehicles affected was the RX-350, a luxury SUV offered by Lexus and replaced by the 450h in 2012. Customers took to social media in droves to drown out the silence wafting from their speakers as they wandered lost among unfamiliar streets, no longer handily marked and highlighted on their screens.
Drivers lamented the customer service performance of both Lexus and uncooperative dealers on Twitter, as others uploaded YouTube videos of their situation. Some were so irritated that they warned prospective Lexus buyers away from the brand.
The solution some drivers discovered was almost comically simple. All was fine once you turned the system off and on again – albeit by disconnecting the vehicle’s battery. Unfortunately, the simplicity of this measure – worthy of a glib sitcom writer – made the situation all the more embarrassing for the many Lexus customer service teams who had initially found themselves stumped by the issue.
Maybe we shouldn’t be too judgmental towards Lexus though. The F-35 fighter jet produced by Lockheed Martin experienced a similar glitch with its radar system that also required the IT Crowd’s answer to everything: turning it off and on again. It also initially had structural defects that made it vulnerable to thunderbolts (as in Zeus, not the WWII fighter); a little embarrassing given its Lightning II moniker and $100 million price tag.
Lexus later confirmed that a faulty application was to blame.
This is something of a modern, first-world problem that we have all become familiar with. Whether it’s the latest smartphone, tablet or laptop, we know they routinely go wrong. From the dreaded Xbox red ring of death to Windows’ blue screen of death, a software update gone wrong is something we both dread and accept as part of our ongoing relationship with technology.
Logically, as more advanced automation becomes integrated into vehicle systems and remote updates become more common, so will such problems. And recent, more serious incidents highlight that there’s more than our convenience at stake.
Joshua Brown, a former Navy SEAL turned technology entrepreneur, was killed in Florida whilst not driving his Tesla Model S. The electric vehicle was equipped with an Autopilot function that saw it heralded as one of the first available ‘self-driving’ cars. It appears that as a truck and trailer turned left in front of the vehicle, the Autopilot function failed to distinguish between the white side of the trailer and the brightly lit sky, and didn’t apply the brakes.
Tesla stresses that drivers need to stay alert and keep their hands on the wheel even when Autopilot is engaged. It is the first death attributed to the technology, coming after 130 million miles clocked safely by Tesla drivers using Autopilot, compared with an average of one fatality per 94 million miles for everyday cars.
And in the last few days, one of Google’s self-driving cars, coincidentally also a Lexus, was crashed into by a van driver who ran a red light. Although the vehicle and software were not at fault, Google revealed back in January that human drivers had to take the wheel 341 times in 14 months in response to software failures and potential hazards.
As drivers we have come to accept that mechanical breakdown is a possibility when it comes to operating complex machines on a daily basis and often at speed. The question is, will such new technology, with all the advances and convenience it brings, become accepted too, along with its downsides and potential risks?