By Tanner Bowen
Tanner Bowen is a junior at the University of Pennsylvania studying business.
Aside from dreading taxes, drivers of automobiles have always faced another unpleasant reality of life: accidents. Whether this involves rear-ending another driver or slamming into that pole in the parking lot, we are subjected to the headache of who is liable for the accident and how our insurance will cover it.
Flash forward to the present, and the development of autonomous vehicles is well within our grasp as a society. Numerous car companies have introduced assisted-driving features such as automatic brakes that activate if the car senses an object that is too close. On the other end of the spectrum is Tesla Motors Inc., which in 2015 activated its Autopilot mode (which allows autonomous steering, braking, lane switching, and a host of other features). But this summer, a Tesla Model S operating on Autopilot struck a big rig while traveling on a divided highway in Florida, killing the driver.
This accident poses an interesting legal question for all drivers on US roadways: if a car crashes while on autopilot, who is liable, the driver or the manufacturer?
This may not come as a surprise, but the popular (and somewhat simple) answer in the legal community seems to be the car manufacturer. The argument runs as follows: if a car is fully self-automated and does not require the attention of human drivers (which seems to be the goal for companies such as Google), then when the car's sensors fail to react in time to avoid a crash, the manufacturer must foot the bill.
Is this a bad thing? Maybe not. In an ideal world, the car with autopilot features would be “smarter” than the human who attempts to drive it. For example, data from the Insurance Institute for Highway Safety shows that automated crash-avoidance braking could reduce total rear-end collisions by around 40 percent. 
But as of now, with a myriad of quasi-human and quasi-autonomous driving features in vehicles, most car manufacturers are still attempting to shift the liability onto the human drivers. For example, General Motors’ upcoming Super Cruise (similar to Pilot Assist) will require humans to remain alert and take over steering if visibility decreases or some other exogenous factor arises while driving.
Even if the future seems bright for self-driving cars and companies develop the proper technologies to provide a safer driving environment, we still live in a legally confusing world of automation where it is not always clear how one can delineate the proper roles of humans and machines.
Although it risks sounding like the plot of a bad science fiction movie, several insurance companies and legal scholars have considered a solution to this sticky problem of discerning between humans and machines: giving robots personhood.
Now, I doubt that many people are saying that your car should be able to apply for a mortgage at the bank or vote in the general election in November, but with the development of Pilot Assist technologies, there needs to be a legal mechanism to separate the human and the car and allow the liability to fall on the vehicle itself.
Consider the following thought experiment: say a car with assisted-driving technologies swerves because it sees a deer in the roadway. However, in the process of doing what a good algorithm should do (i.e., avoid an unnecessary accident), the vehicle hits a car in the other lane. Who is liable? According to some legal theorists, the car itself would be at fault. In this universe, the car would be an insurable entity, which would provide a faster insurance payout to injured passengers while also shielding manufacturers against frivolous lawsuits.
Although this might be an ideal world to some, others find that assigning liability to vehicles merely defers responsibility from the human passengers. There are pros and cons to each argument.
In our increasingly technology-based world, it is no surprise that we have to address this question of liability involving machines. Our tort system is already complex, so adding robots into the existing framework of liability for humans and corporations seems like the next logical step. It will be interesting to see how insurance companies and producers of assisted-driving technologies continue to grapple with this issue as technology attempts to improve our lives.
Fleming, Charles. "Tesla Car Mangled in Fatal Crash Was on Autopilot and Speeding, NTSB Says." Los Angeles Times. July 26, 2016. Accessed September 14, 2016. http://www.latimes.com/business/autos/la-fi-hy-autopilot-photo-20160726-snap-story.html.
Iozzio, Corinne. "Who's Responsible When a Self-Driving Car Crashes?" Scientific American. May 1, 2016. Accessed September 14, 2016. http://www.scientificamerican.com/article/who-s-responsible-when-a-self-driving-car-crashes/.
Madrigal, Alexis C. "If a Self-Driving Car Gets in an Accident, Who—or What—Is Liable?" The Atlantic. August 13, 2014. Accessed September 14, 2016. http://www.theatlantic.com/technology/archive/2014/08/if-a-self-driving-car-gets-in-an-accident-who-is-legally-liable/375569/.
Photo Credit: Flickr User smoothgroover22
The opinions and views expressed through this publication are the opinions of the designated authors and do not reflect the opinions or views of the Penn Undergraduate Law Journal, our staff, or our clients.