
Will you let self-driving cars make moral decisions? πŸš—


However you may feel about autonomous vehicles (AVs), the biggest companies in the world can’t seem to pour enough money into making them a reality. Waymo, the company thought to be in the lead, has spent a whopping ~$3.5 billion! It is one of those technologies that will massively change how we live our daily lives.

With the advent of AVs, we hope to see more efficient traffic flow and lower carbon emissions. Think of them as lots of carpools that don’t get road rage, behave in very predictable ways, and can make split-second decisions when something unexpected happens. It would probably be better for the deer 🦌 population in Pennsylvania as well.

For companies like Uber, it means a more predictable supply of drivers, while for drivers, it could mean massive unemployment and retraining requirements. But before we get ahead of ourselves and start to worry about our jobs, or celebrate fewer people flipping us off on the road, it’s vital to think about why self-driving cars are just so damn difficult to get right.

The Dilemma

A few decades ago, even the smartphones of today would’ve been considered pretty much “impossible”. Looking back at the massive leaps in technology, we sometimes forget the incremental changes that led us to where we are. So it’s only reasonable to assume that cars will also evolve into AVs, and maybe even into something we wouldn’t recognize as cars at all.

The problem isn’t really the technical aspects of making self-driving cars. It’s that the world most of us live in right now isn’t built to handle them. We have designed this world around people making the decisions, not machines.

What it means to make decisions quickly

Let’s assume that we have a car, SuperAutonomousVehicle, or Sav. Now Sav can do everything you as a driver can, but it does it with precision, without getting tired, and with intention. Basically, Sav is the best driver. It has no flaws, and every action it takes has no randomness in it.

One day, you’re taking a drive in Sav, and suddenly, the brakes stop working. Sav, although a perfect driver, is not all-powerful, so it can only react. But you’re thankful that you’re not the one driving, because as humans, we tend to panic a lot. Stressful situations are not the best place to make quick decisions, especially ones that involve our lives.

Uh-oh, you see a crosswalk up ahead with some people on it. Sav also sees this and, a few *beep beep*s later, it realizes there are only two possible outcomes: either veer off to the side, avoiding the pedestrians but fatally injuring you, or drive straight, mortally wounding the pedestrians but saving your life.

The moral dilemma: whose life should Sav give preference to? And if it does make a choice, how will we as a society react to it?

Why is it such a big deal?

If you or I were in that situation, what would we do? It’s very likely we wouldn’t even be able to think of all possible outcomes or estimate the probability of surviving each one. We would most likely wait too long to make a decision, and then end up making a subpar one, quite possibly injuring everyone involved.

Why then would it be a problem if Sav did something more optimal? For one, optimal is very subjective. For everyone in this situation, optimal would at the very least mean their own survival. The problem isn’t that we make a bad decision, but the intent and reasoning behind our decision.

It’s an undeniable truth that, as humans, we suck at making decisions. (Un)Fortunately, Sav isn’t afforded the same benefit of the doubt. When it makes a decision, we know exactly why it made it. There is no randomness involved. If it chooses to save one life over another, we know that it made a calculation in which it assigned value to the lives of people.

So would you want these cars on your streets? Would you be comfortable riding in one? If Sav’s company asked you to sign a disclaimer releasing them from any liability, would you do it? Or would you want to choose beforehand how Sav should make these decisions?

Giving Morality back to the people?

Okay, so we know that Sav can only make decisions of life and death based on its algorithms. We could make Sav behave randomly in those situations where it has only one or two choices. But I really don’t think we as a society would be happy with something that was built to be more accurate and decisive suddenly leaving life to random chance in a situation where it has complete control.
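To make that alternative concrete, here is a toy sketch in Python of what “let chance decide” would mean. The option names are hypothetical; the only point is that a coin flip replaces any calculated preference between the remaining outcomes.

```python
import random

# Toy sketch of the "behave randomly" idea: when every remaining option
# is bad, pick one uniformly at random instead of ranking the outcomes.
# The option names are purely illustrative.
def choose_at_random(options):
    return random.choice(options)

print(choose_at_random(["swerve", "stay_course"]))
```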

Enter the MoralMachine.

The MoralMachine is a platform set up by the MIT Media Lab to gather perspectives on how we would make the difficult decisions that Sav has to make. The idea is to ask us these questions while we are in a state of mind to think properly, similar to how computers aren’t emotionally affected by what’s going on around them.

The platform presents everyone with a set of scenarios, one after the other, asking what action the car should take. Each action results in some form of fatality, whether it’s one group of pedestrians versus another, or passengers versus pedestrians.

Where it becomes even more interesting (or shocking) is when we are tested on what we assign more value to. Do we, more often than not, choose to save people who are younger? Do we differentiate between animal and human lives? Does it matter if someone is jaywalking?
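As a rough illustration (not the platform’s actual data model; every field name here is hypothetical), a MoralMachine-style scenario could be represented something like this:

```python
from dataclasses import dataclass

@dataclass
class Character:
    species: str      # "human" or "animal"
    age_group: str    # e.g. "child", "adult", "elderly"
    jaywalking: bool  # only meaningful for pedestrians

@dataclass
class Outcome:
    label: str        # e.g. "swerve" or "stay_course"
    fatalities: list  # the Characters who die under this choice

@dataclass
class Scenario:
    outcomes: tuple   # two Outcomes; every choice costs someone their life

def record_choice(scenario, chosen_label):
    """Store which outcome a respondent picked, so preferences
    (young vs. old, human vs. animal, lawful vs. jaywalking)
    can be aggregated across many respondents later."""
    return {"scenario": scenario, "choice": chosen_label}
```

Aggregating millions of such choices is what lets the platform report patterns like the age preference discussed below.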

The question that arises from this is whether policy makers and car manufacturers should work together to use this data to democratically choose how Sav will behave. In a democracy, we vote for our political leaders, and thus for the values that they stand for. Even if those values aren’t aligned for everyone, it doesn’t matter: if the majority wants it a certain way, they win.

“But how can society agree on the ground truth – or an approximation thereof – when even ethicists cannot?”

The dark side of the β€˜Moral Machine’ and the fallacy of computational ethical decision-making for autonomous vehicles

Which is exactly why it’s not a good idea to let the MoralMachine be used for any sort of policy making. Sure, we could let everyone vote on how to decide between demographics, but it would always leave some people less safe than others. According to the platform’s current results, younger populations would likely benefit from this. You would essentially stop walking next to roads, since there is always the chance that someone younger than you is riding in an AV, against whom your life would matter less.

What we value (source: MoralMachine)

Similarly, people would have to weigh the risks of getting in a car in the first place. By doing so, they run the risk of a younger pedestrian stepping in front of the car, triggering Sav’s life-value-assignment algorithm to decide that their life is valued less.

Age is just one characteristic I focused on here; imagine what it would be like for all the other minority populations. We would start to need a morning reminder of the risk of taking the car or walking, depending on the current social demographics around us. It’s pretty ridiculous.

Is self-driving just a pipe dream?

Ethics and morality in artificial intelligence have become one of the biggest challenges in the drive to incorporate more and more AI into our lives. Companies have a bias towards making profit, and individuals have varying values they wish to stay true to. Any technology that becomes available will have certain biases baked in, whether they come from the engineers or the organization.

For self-driving cars, we need to change the setting and the framework in which we evaluate their behavior.

1. Redesigning structure

When we think of self-driving, we imagine the world as it is, with the only difference being self-driving cars. Instead, maybe part of the solution lies in accepting that the world, or at least its transportation infrastructure, needs to look a bit different. Crosswalks, roads, traffic lights, STOP signs: all are based on the assumption that humans need to coordinate with one another to maintain safety. Self-driving cars wouldn’t necessarily need the same infrastructure. Similar to how subway systems run in parallel with roads, maybe we need to redesign cities to incorporate pedestrians differently than we currently do.

2. Re-framing how we think

It is important to note that although the MoralMachine makes us believe that AVs will have to make decisions based on morality, it’s not really beneficial to try to incorporate morals and ethics into machines. Since we as humans haven’t been able to agree on what is morally correct, we have no way to actually encode that for machines to execute.

It is not whether a car ought to kill one to save five, but how the introduction of the technology will shape and change the rights, lives, and benefits of all those around it.

The folly of trolleys: Ethical challenges and autonomous vehicles

The argument here is that AI uses probabilities to determine what to do. It tries to maximize the total expected benefit, based on the probabilities of different events and the value of the outcome of each action. We should not, and cannot, model the value of the lives of different demographics or populations into its decision making.
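As a minimal sketch of what “maximize expected benefit” means mechanically, here is a toy example. The action names, probabilities, and benefit numbers are entirely made up for illustration, and deliberately contain no demographic information.

```python
# Toy expected-benefit maximization: each action maps to a list of
# (probability, benefit) pairs for its possible outcomes. The numbers
# are invented for illustration only.

def expected_benefit(outcomes):
    """Sum of probability * benefit over an action's possible outcomes."""
    return sum(p * b for p, b in outcomes)

actions = {
    "brake_hard":  [(0.7, 10.0), (0.3, -50.0)],  # likely minor damage, small chance of collision
    "swerve_left": [(0.5,  5.0), (0.5, -80.0)],  # coin flip between near-miss and severe crash
}

best = max(actions, key=lambda a: expected_benefit(actions[a]))
print(best, expected_benefit(actions[best]))
```

The machinery itself is just arithmetic over probabilities and benefit numbers; the contentious part is who decides what those numbers encode, which is exactly why they should not encode the value of particular groups of people.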

If you’ve seen any science fiction movie where robots try to protect humans from themselves, like “Wall-E”, you know that assigning morality tasks to AI is usually where things start to go wrong.

So rather than focusing on what Sav will do in those one-off situations, we need to understand how Sav actually makes decisions and remove as much bias from that process as possible. Thinking of the morality problem as posed by the MoralMachine is a distraction.

Resources