Off the Map: Exploring Liability in the Age of Self-Driving Cars

With preparations for their deployment already underway, the debate over self-driving cars has created a clear divide: there are those who eagerly await their arrival, and those who simply don’t. But with traffic taking more and more time out of our daily routines and distracted driving causing thousands of deadly car crashes per year, even naysayers would admit that taking something as challenging and dangerous as driving off the table has its appeal. What’s more, beyond the obvious benefits of reducing accidents and saving lives, self-driving cars are expected to deliver broader societal gains, like lowering fuel consumption and providing mobility to those with visual, age-related, or physical disabilities that make driving difficult.

With human error still the primary cause of collisions on the road today, self-driving cars have the long-term potential to significantly reduce the risk of fatality associated with driving (some 40,000 people died on US roads last year, and that figure is rising). In the meantime, though, mixing bicycles, pedestrians, newly designed autonomous cars, and conventional cars on the same roads could do more harm than good. The early years would certainly test our collective knowledge (or lack thereof) about the limits of self-driving technology, but in real time and at full speed. This transitional period during the first deployments could conceivably make anyone who shares the road with a self-driving car a veritable “canary in the coal mine,” traveling in conditions that may be just as dangerous to drivers and passengers as the ones we face today, if not more so.

A harrowing real-life example of the risk of this “autonomous ambiguity” took place on May 7, 2016, when Joshua Brown lost his life after his Tesla Model S, traveling at 74 miles per hour, sliced under a semi-truck trailer that had turned across his path near Williston, Florida. By sheer chance, no one else was hurt. At the time, Brown was using Tesla’s signature Autopilot feature, which failed to detect the white side of the trailer against the bright sky (an assessment of the car’s computer logs after the accident revealed nothing technically wrong with Brown’s Autopilot system, which had warned him several times to keep his hands on the wheel before the crash). In the investigation that followed, the cause of Mr. Brown’s untimely death proved unsettling, because it brought to light arguably the greatest issue that automakers and consumers will face as self-driving cars get closer to being sold on the market: he relied on the technology to do more than it actually could. As long as there are no standards or clear guidelines for self-driving capability, drivers will remain in danger of trusting the new technology too much and too quickly. With such deadly consequences at stake, it is essential for self-driving car manufacturers to keep people from making that mistake by clarifying the functional limits of autonomous driving systems and reminding drivers to remain engaged while using them.

Some recent advancements in self-driving technology have helped deliver that message, at least a bit. Self-driving technology has developed enough for the Society of Automotive Engineers (SAE) to categorize cars into levels based on their ability to function independently, ranging from limited driver assistance, where the car offers built-in features to aid a controlling human driver, to complete autonomy (like Google’s Waymo vehicles), where the car navigates and drives without any human input at all. Some self-driving cars have specific features to ensure that drivers don’t mentally “check out” as soon as autopilot systems are engaged. Cadillac uses a face-scanning camera to ensure that the driver’s eyes stay on the road, while Tesla has drivers read and agree to a disclaimer about the need to keep their hands on the wheel at all times (after Brown’s accident, Tesla also added a lockout system that will bring the car to a stop with its hazard lights flashing if the driver ignores three consecutive warnings to keep their hands on the wheel).
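To picture how that kind of lockout could behave, here is a minimal sketch in Python of a three-strike hands-on-wheel monitor. The class names, states, and the three-warning threshold are illustrative assumptions drawn only from the behavior described above, not Tesla’s actual implementation.

```python
from enum import Enum, auto


class AutopilotState(Enum):
    ENGAGED = auto()      # system is steering; driver assumed attentive
    WARNING = auto()      # hands-off detected; driver is being alerted
    LOCKED_OUT = auto()   # too many ignored warnings; car pulls itself over


class HandsOnWheelMonitor:
    """Hypothetical three-strike escalation: after MAX_IGNORED_WARNINGS
    consecutive ignored hands-on-wheel prompts, the system locks out,
    flashes the hazard lights, and brings the car to a stop."""

    MAX_IGNORED_WARNINGS = 3  # assumed threshold, per the behavior described above

    def __init__(self):
        self.state = AutopilotState.ENGAGED
        self.ignored_warnings = 0

    def update(self, hands_on_wheel: bool) -> AutopilotState:
        if self.state is AutopilotState.LOCKED_OUT:
            return self.state  # lockout persists for the rest of the drive

        if hands_on_wheel:
            # Driver responded: clear any pending warning and reset the count.
            self.state = AutopilotState.ENGAGED
            self.ignored_warnings = 0
        else:
            # Driver ignored a prompt: escalate, and lock out after the limit.
            self.ignored_warnings += 1
            self.state = AutopilotState.WARNING
            if self.ignored_warnings >= self.MAX_IGNORED_WARNINGS:
                self.state = AutopilotState.LOCKED_OUT
                self._engage_hazards_and_stop()
        return self.state

    def _engage_hazards_and_stop(self):
        # Placeholder for the vehicle-side action: hazards on, controlled stop.
        print("Hazard lights on; bringing vehicle to a controlled stop.")
```

The point of the sketch is simply that the escalation is stateful: the system has to remember how many warnings the driver has already ignored before it decides to take the car out of the driver’s hands.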

Although vehicles traveling on public roads are subject to both federal and state jurisdiction, states are generally responsible for establishing liability and insurance rules. The federal government confirmed as much last year in its “Federal Automated Vehicles Policy,” which provides guidance on self-driving cars. For now, it is still unclear how self-driving cars will change the world of auto insurance, or whether and how states are preparing. Presently, “the [auto] insurance industry is most focused on getting access to the information it will need to determine who or what is at fault in a collision,” says Bob Passmore, an assistant vice president with the Property Casualty Insurers Association of America. In a typical car accident, the drivers involved sort out between themselves who was at fault; otherwise, the police and/or insurers step in and investigate to make that determination. As cars become more automated and the line dividing technological error from human error blurs, determining liability will become increasingly difficult and, eventually, impossible without an evaluation of black-box evidence. In Passmore’s words, “it’s not just what you did or didn’t do, it’s what the car did or didn’t do.”

If the new self-driving technology could complicate even simple accident investigations, how will it affect who is held liable when an autonomous car accident results in injury or death? That also remains to be seen. But in theory, if autonomous cars make a driver little more than a glorified passenger, there may not be a compelling reason to hold that driver responsible, criminally or civilly, when a fatal accident takes place while the car was in control. For this reason, the National Highway Traffic Safety Administration (NHTSA) has encouraged states to revise their tort laws to hold autonomous cars responsible when crashes occur. Of course, ceding much of the oversight and control to the car would keep phones and other tempting distractions from endangering drivers and their passengers (studies suggest that self-driving cars are far safer than human-driven ones). But when deadly or injurious crashes do occur, automakers should brace for that greater power to translate into greater responsibility, with liability shifting from driver to manufacturer.

Logical as that shift may be, a collective tendency to sue automakers for autonomous car accidents could prove misguided, because it could create more red tape in the civil justice system for the tort victims who sue to seek compensation. If a self-driving car’s software and various other components are made by many different manufacturers, assessing those numerous interacting systems could create a nightmare when it comes to determining fault in an accident case. With more advanced technology, figuring out what aspect of the car failed and who is liable becomes difficult and expensive (for instance, evaluating a car’s black-box evidence, as mentioned earlier, would most likely require an expert, at an expert’s fee). Combine that with carmakers’ determination to keep their self-driving software secret, and a plaintiff’s personal injury case could morph into a lengthy and complicated product liability suit with multiple defendants.

Looking forward, legislators will have to move faster than the speed of technology if they are going to update existing law and/or create an entirely new legal framework that accounts for self-driving cars (some recommend strict liability on the part of car manufacturers, while others propose a no-fault fund from which victims could claim compensation). At this point, it is still too soon to bet on one legal structure over another, but one fact is certain: the race against self-driving cars has started, and we should do our best to stay ahead.


Sources and further reading:

“Behind the Wheel: Who’s to blame when self-driving cars crash?” by Steven Seidenberg, ABA Journal, Volume 103, July 2017

“Who’ll be responsible when self-driving car crashes?” by David Gutman

“How Self-Driving Cars Work, and When They’ll Get Real” by Bill Howard

“Tesla driver in fatal Autopilot crash ignored safety warnings” by Steve Dent

“Self-driving cars are confusing drivers–and spooking insurers” by Jack Stewart

“What will happen when a self-driving car kills a bystander?” by Jack Stilgoe

“Car companies’ vision of a gradual transition to self-driving cars has a big problem” by Timothy B. Lee
