The Ethics of Self-Driving Cars

Just last month, 40-year-old Joshua Brown tragically lost his life whilst being driven by his new Tesla Model S. The car’s Autopilot feature was unable to recognise the “white side of the tractor trailer against a brightly lit sky” as the truck drove across the car’s path. It was the first tragedy of its type; the US government is now investigating the matter, and the outcome will no doubt set the legal precedent for self-driving vehicle crashes in the coming years. Thefullapple investigates the debate around self-driving cars and the life-or-death decisions these cars may have to make.

Automated Vehicles (AVs), self-driving cars, or simply “cars” as they will be known in 25 years, are rapidly increasing in popularity. Self-driving cars promise to be safer, to decrease traffic congestion, and to be more fuel-efficient than their manual counterparts. The technology for these cars has taken off in recent years about as quickly as the quality of the English football team has crashed, but there are also some ethical dilemmas involved with cars on autopilot zipping around the city: if the car is faced with an unavoidable accident, should it protect the driver or minimise the total loss of life? And what does this mean for the rest of us if we’re not lucky enough to be sat in the driving (or now chilling) seat?

The Tesla Model S – Source: carandman

Who’s doing it?

Not only the world’s major car companies but also tech giants such as Google, Apple and Uber have started work on the technology that will drive AVs. Tesla are the most advanced of all, with their Model S already on the roads. The car’s sensors detect objects as far as two football fields away and process all the collected information to navigate the vehicle safely through busy city traffic.

Ok, but what makes these vehicles so great?

94% of traffic accidents are due to human error. This could be dramatically reduced by the introduction of AVs, since computers react near-instantaneously and reliably. They can also collect far more data about their surroundings than a human physically can (see big data), and so can identify more potential hazards. Compare this to a human for a minute – we sometimes need to post a really urgent selfie on Instagram, show off to our friends by driving like an F1 driver, or hold a Starbucks venti skinny caramel shot-macchiato. There is, after all, a reason the saying goes “we’re only human.”

A particularly fascinating idea is that AVs would give people who are currently unable to drive themselves – the elderly, disabled, deaf or blind – the freedom to travel, greatly increasing their independence and thus their quality of life. Let’s also not forget the very young: no longer would you have to drop your kids off at yet another soccer practice. Optimised decision making would also reduce slow traffic and congestion. Once AVs dominate, you could even imagine that upon entering a city, you would hook your car up to a central computer that directs all the cars through the streets in an optimised manner – something like the sketch below. Reports have yet to indicate whether this has the power to transform reactions to Monday morning congestion levels from ‘this makes me wanna kill myself’ to ‘An extra 20 min nap in the traffic? Don’t mind if I do.’
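To make the central-computer idea a little more concrete, here is a minimal sketch in Python. Everything in it is our own invention (the toy street grid, the travel times, the names – no real traffic system works exactly like this): a dispatcher runs Dijkstra over a weighted street graph and bumps the weight of every street it assigns, so that later cars are steered around the congestion as it builds up.

```python
import heapq

# Toy street graph: node -> {neighbour: base travel time in minutes}.
# The grid and the numbers are made up purely for illustration.
streets = {
    "A": {"B": 4, "C": 2},
    "B": {"A": 4, "C": 1, "D": 5},
    "C": {"A": 2, "B": 1, "D": 8},
    "D": {"B": 5, "C": 8},
}
# How many cars the dispatcher has already sent down each street.
load = {(u, v): 0 for u in streets for v in streets[u]}

def cost(u, v):
    # Every car already routed down a street makes it a little slower.
    return streets[u][v] + 2 * load[(u, v)]

def route(start, goal):
    """Dijkstra over the current, congestion-adjusted travel times."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        time, node, path = heapq.heappop(queue)
        if node == goal:
            return time, path
        if node in seen:
            continue
        seen.add(node)
        for nxt in streets[node]:
            if nxt not in seen:
                heapq.heappush(queue, (time + cost(node, nxt), nxt, path + [nxt]))
    return float("inf"), []

def dispatch(start, goal):
    """Assign a route to one car and record the load it adds to each street."""
    time, path = route(start, goal)
    for u, v in zip(path, path[1:]):
        load[(u, v)] += 1
    return time, path

# Two cars making the same trip get spread across different streets:
print(dispatch("A", "D"))  # first car takes the currently fastest route
print(dispatch("A", "D"))  # second car is diverted as that route fills up
```

Real traffic optimisation is of course a far harder problem than this, but the principle is the same: with every car reporting to one dispatcher, congestion becomes something to route around before it happens rather than something to sit in.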

The average person spends 4.3 years of their life driving and another 3 months of it sitting in traffic. This is a depressing statistic when you consider that you spend only 48 days having sex and just 14 days kissing, but AVs change all that – in the self-driving car, you could sit back, relax, listen to music, learn a language or catch up on your favourite TV show. But what if, while you’re asleep in your car, you’re drawn into a situation where a crash is inevitable? What decisions will the car make?

Ethical challenges

There are a number of scenarios in which the software is faced with an unavoidable accident – i.e. when the car is faced with the impossible question “who should I kill?”. If you thought some sort of AI robot/droid would be the first machine to start playing god and deciding who to kill, think again.

The three unavoidable-accident scenarios (a)–(c) discussed below – Source: http://arxiv.org/abs/1510.03346

In the first scenario, almost everyone would agree on a common plan of action: to reduce the death toll, the car swerves in order to kill one passer-by instead of the group of 10 people on the road. This is the classic utilitarian approach, which demands that any moral question be decided in such a way as to reduce suffering – or, in this case, the death toll. A very morbid discussion, we know, but an important thing to think about.
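In code, that first rule is almost embarrassingly short. Here is a minimal sketch (the manoeuvre names and casualty counts are hypothetical, not from any real system): the car scores each available manoeuvre by its expected death toll and simply picks the smallest.

```python
# Minimal utilitarian chooser: each option is a possible manoeuvre
# tagged with the number of people expected to die if it is taken.
def choose_manoeuvre(options):
    """Return the option with the smallest expected death toll."""
    return min(options, key=lambda o: o["expected_deaths"])

# Scenario (a): stay on course and hit the group, or swerve and hit one person.
scenario_a = [
    {"action": "stay on course", "expected_deaths": 10},
    {"action": "swerve", "expected_deaths": 1},
]
print(choose_manoeuvre(scenario_a)["action"])  # -> swerve
```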

A more delicate example is shown in scenario (c), where in order to save the group of 10, the car has to sacrifice its owner. As before, the utilitarian approach would be to minimise the death toll and let the car self-destruct. Studies have shown that the larger the number of lives saved, the more willing people are to sacrifice themselves. What a heroic species we are!

The question here is: who do we allow to make these decisions? As heroic a species as we may be, if the owner of the car has programming rights to decide who dies in the above situation, then in the face of death, they will most likely want to save themselves. And if the car is pre-programmed for the greater good, how many of us are actually willing to purchase such a car? The more subjective the outcome of the decision, the harder the decision is to make. Your own personal answer might also depend on whether your kids are in the back of the car. Do we go as far as to say that saving a four-year-old is worth more than saving a 60-year-old chain smoker? Should we decide to avoid a motorcycle by swerving into a wall, considering that the driver of the car perhaps has a greater chance of survival than the less-protected bike rider? Or, like with everything else in our society, will the importance your car places on your life be directly proportional to the amount you paid for its security package?

That sounds kinda scary… so how are these companies expecting to sell any?

According to a recent paper on the subject, such an ethical algorithm would have to satisfy three conditions: it would have to be reasonably consistent, it should not cause public outrage, and it should not discourage buyers. We’re not asking for too much, hey?

A car that self-destructs in any of the above cases would probably not be that popular with buyers, apart from a small community of wannabe James Bonds. And if the public is unwilling to adopt these cars, we run into problems that are not just economic but also ethical: the more AVs there are on the streets, the safer the traffic, which saves lives. A car that always saves the driver, on the other hand, while popular with buyers, would be unlikely to be accepted by society. More importantly, consistency is required: all cars should act the same, so that the driver cannot select the ‘ethical mode’ of the car.

Source: Jerry Garrett

We believe that a good solution to this scenario would be an algorithm that acts as a perfect utilitarian, saving the 10 people over the one, and saving the 4-year-old over the 60-year-old. It would have to register how many passengers are on board, and be aware of the details of the situation, carefully calculating how to minimise the impact of the accident.
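As a sketch of what that might look like – and this is entirely our own illustration, not anything a manufacturer has published – each manoeuvre could list the groups of people it endangers, occupants included, together with an estimated probability that each person dies, and the car would minimise the expected number of lives lost:

```python
# Expected-fatalities version of the utilitarian chooser (illustrative only).
# Each manoeuvre lists (group size, estimated probability each person dies);
# the car's own occupants are just another group in the list.
def expected_deaths(manoeuvre):
    return sum(n * p for n, p in manoeuvre["at_risk"])

def choose_manoeuvre(manoeuvres):
    """Pick the manoeuvre minimising expected lives lost."""
    return min(manoeuvres, key=expected_deaths)

# Scenario (c): braking in a straight line endangers the group of 10 on the
# road, while swerving into the wall likely kills the single occupant.
manoeuvres = [
    {"action": "brake straight", "at_risk": [(10, 0.8), (1, 0.1)]},
    {"action": "swerve into wall", "at_risk": [(1, 0.9)]},
]
best = choose_manoeuvre(manoeuvres)
print(best["action"])  # -> swerve into wall (0.9 expected deaths vs 8.1)
```

Weighting the four-year-old above the 60-year-old would amount to multiplying each person by a ‘value of life’ factor – which is precisely the kind of knob that, by the consistency requirement above, no individual driver should be allowed to turn.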

So sitting down, choosing your driving modes (2-wheel drive, automatic lights & wipers, and now ‘utilitarian death approach’) and registering the physical and mental attributes of everyone on board could be the new ‘key in ignition and go’. And even if that sounds like more effort than benefit in today’s tech climate, it’s fair to say that the more AVs we have, and the more the technology advances, the less we’ll need to worry about these disastrous situations arising in the first place.

Thefullapple wonders what’s next in this age of automation. A self-driving bike? Self-walking shoes? Perhaps the real question is: what will we actually still be able to do ourselves in 50 years’ time?

One Comment

  1. Anonymous says:

     Interesting post. Another question is: in the case of an error or malfunction of the software, who is legally responsible – the driver or the car manufacturer?
