Driverless Vehicles: A Modern Trolley Problem

As automobile companies — Tesla, Volvo, BMW — venture into the realm of autonomous vehicles, a future of driverless cars seems to be within our reach. However, while we lose ourselves in pleasant daydreams of relaxing during peak-hour traffic jams, a problem has emerged. How should a self-driving car handle a potentially life-threatening situation — especially when more than one life is at stake?

A Problem of Analysis

A 2015 report by McKinsey & Company predicts that autonomous vehicles will reduce road accidents by up to 90%. However, these machines are not infallible. By their very nature, they are subject to Moravec's Paradox: processes that are second nature to humans are much harder to program into artificial intelligence (AI). Think about it: you are bombarded by millions of stimuli when you drive even a short distance. You need to be aware, at the very least, of where you're going, your position relative to other vehicles on the road, traffic signs, pedestrians, cyclists, and other obstacles. This kind of analysis demands enormous computational capacity from a machine. As a result, actions that require split-second decisions (like braking in an instant to avoid hitting a pedestrian) are exceptionally difficult for AI to process and for engineers to code.

The Main Dilemma

This brings us back to the main problem: what happens when a pedestrian darts in front of an oncoming driverless car? If the vehicle is moving above a walking pace, it may be unable to slow down or stop in time to avoid a collision. Does it then swerve into oncoming traffic, causing a major accident and potentially injuring its own passengers? Does it drive onto the pavement, injuring other pedestrians? Or does it continue on its course and hit the pedestrian? A human driver makes such a decision based on multiple factors, including instinct, biases, and their own moral code. AI, on the other hand, will do exactly what its programming demands. Which option it deems most acceptable ultimately comes down to the ethical theory the program's engineers follow. A utilitarian, who believes all lives have equal value, would program the onboard AI to harm the fewest people, while someone who places personal safety first would program it to protect its passengers at all costs.
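To make the contrast concrete, here is a minimal Python sketch of how such a policy could be expressed as a scoring rule over candidate maneuvers. The maneuver names, harm estimates, and weights are invented purely for illustration and do not reflect any manufacturer's actual logic.

```python
# Hypothetical sketch only: the maneuvers, harm estimates, and weights below
# are invented for illustration, not any real vehicle's decision logic.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    pedestrian_harm: float  # estimated harm to pedestrians (0 = none, 1 = fatal)
    passenger_harm: float   # estimated harm to the car's own passengers

def utilitarian_cost(m: Maneuver) -> float:
    # All lives count equally: minimise the total expected harm.
    return m.pedestrian_harm + m.passenger_harm

def passenger_first_cost(m: Maneuver) -> float:
    # Passenger safety dominates: harm to passengers is weighted far more heavily.
    return 100 * m.passenger_harm + m.pedestrian_harm

options = [
    Maneuver("continue and brake", pedestrian_harm=0.9, passenger_harm=0.0),
    Maneuver("swerve into oncoming traffic", pedestrian_harm=0.0, passenger_harm=0.6),
    Maneuver("drive onto the pavement", pedestrian_harm=0.7, passenger_harm=0.1),
]

print(min(options, key=utilitarian_cost).name)       # -> swerve into oncoming traffic
print(min(options, key=passenger_first_cost).name)   # -> continue and brake
```

Under these made-up numbers, the utilitarian rule swerves into oncoming traffic (lowest total harm), while the passenger-first rule continues and brakes (no harm to passengers). The sensor data is the same in both cases; it is the engineer's ethical theory that settles the dilemma.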

Ethical Legislation

The German federal government has taken steps to try to solve this issue. In August of last year, an ethics committee on automated driving presented a set of guidelines stressing that driverless vehicles must do the least amount of harm when put into such situations. For example, if hitting a pedestrian would kill them, but swerving into a divider would only injure the passengers, the car should swerve. However, it is unclear what parameters allow the car to determine how much damage is 'too much'.
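Read literally, the guideline implies a comparison of expected harms plus a cutoff on acceptable damage, and that cutoff is exactly what remains undefined. A hypothetical sketch of such a rule, with all values assumed purely for illustration:

```python
# Hypothetical reading of the "least harm" guideline; the threshold value is
# assumed for illustration and is precisely the parameter the guidelines leave open.
def should_swerve(pedestrian_harm_if_continue: float,
                  passenger_harm_if_swerve: float,
                  too_much_damage: float = 0.5) -> bool:
    # Swerve only if doing so causes less harm than continuing, and the harm
    # it causes to the passengers stays below the acceptable-damage threshold.
    return (passenger_harm_if_swerve < pedestrian_harm_if_continue
            and passenger_harm_if_swerve < too_much_damage)

# The guideline's example: a fatal hit (1.0) versus minor passenger injuries (0.2).
print(should_swerve(pedestrian_harm_if_continue=1.0, passenger_harm_if_swerve=0.2))  # True
```

Everything hinges on where too_much_damage is set, and the guidelines offer no number, which is precisely the gap noted above.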

The prospect of standardized guidelines in Germany brings forward a new question: is this reasoning viable across the globe? For instance, it is difficult to imagine its application in an Indian context. Our roads are fraught with obstacles that are hard to predict or even detect, from potholes and ongoing construction to vehicles driving on the wrong side of the road. How then should an autonomous vehicle decide whether to drive into a pothole or hit a cow standing in the middle of the road? Further, the ethical issues only deepen when we take into account the various cultural norms prevalent in our societies.

However, as the German ethics committee observed, countries that adopt this technology will see a net benefit, as AI would reduce the number of accidents caused by human error. As it stands, the conversation on the ethics of self-driving cars is far from over, and it is just as vital to creating a future filled with autonomous vehicles as the technology that drives them.