Somehow the Trolley Problem has made its way into the zeitgeist; I’ve heard more than one person bring it up recently. Most likely that’s due to autonomous driving and AI, but I think it’s worth addressing at least from a human perspective.

I have severe problems with the way the Trolley Problem is framed. For the uninitiated, the Trolley Problem is a thought experiment that, simply put, asks you to decide who lives and who dies. Here’s how Wikipedia describes it:

There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options:

1. Do nothing and allow the trolley to kill the five people on the main track.
2. Pull the lever, diverting the trolley onto the side track where it will kill one person.

Which is the more ethical option? Or, more simply: What is the right thing to do?

Here’s the problem I have: the way this problem is framed is too simplistic. Honestly, it may be nefariously so, depending on the intentions of the person asking, but I digress. Fundamentally it comes down to the placement of value (real and perceived) and to ethical and moral priors. Even with all things being equal, it’s impossible to answer this question, because life is nuanced in ways that almost no one is even conscious of. Decisions of life and death cannot be summed up in simple thought experiments.

Frame it this way. Of the five people on one side of the track there’s a physician, a father of two, a leader of (your) faith, a business leader who employs thousands of people, and a known and confirmed terrorist. On the side with one person is your child. You have to look at the kinds of people involved because you’re human. You’re not going to be able to stop your brain from making these value judgements. Maybe you hate your child, so he goes. Maybe your child is a known serial killer, but at the same time you hate the terrorist, and the physician is an abortion doctor with whom you have deep moral disagreements - how do you handle that then? Conversely, what if you love your child dearly, but everyone on the other side is a complete saint? The arrangements are infinite.

And yes, these are absurd examples, but the Trolley Problem itself is absurd when posed to humans.

Implications in AI

This is scary to me. The Trolley Problem, when looked at through the lens of AI, has to be solved, or at least computed to some degree. For example, say there’s an autonomous vehicle whose field of vision has determined that on one side there’s a group of school children, in the middle there’s a family, and on the other side there’s a group of veterans. All three groups have significant value to society. The vehicle cannot stop in time to avoid hitting any of them; it must hit one given its speed and direction. How does the vehicle compute this?

The easy answer is to compute the value of each group and make a decision. The veterans are old, yet they inspire patriotism in their community. The school kids may include future doctors and scientists. The family may be wealthy and donate millions to charity every year. Again, it comes down to placing value on things and people. The problem comes in determining that value. Value can be (empirically and theoretically) subjective in ways that you could never imagine.
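To make that “easy answer” concrete, here’s a rough sketch of what a score-and-pick policy might look like. This is purely hypothetical; the group names and the assumed_value_per_person weights are made up, and that one number is exactly where all the trouble described above lives.

```python
from dataclasses import dataclass


@dataclass
class Group:
    """A hypothetical cluster of people the vehicle might hit."""
    name: str
    size: int
    # Made-up "societal value" weight per person. This is precisely the
    # number nobody can agree on, and it silently encodes the builder's priors.
    assumed_value_per_person: float


def least_harm(groups: list[Group]) -> Group:
    """Naively steer toward the group whose loss 'costs' the least,
    where cost = size * assumed value per person."""
    return min(groups, key=lambda g: g.size * g.assumed_value_per_person)


groups = [
    Group("school children", size=8, assumed_value_per_person=1.0),
    Group("family", size=4, assumed_value_per_person=1.0),
    Group("veterans", size=6, assumed_value_per_person=1.0),
]

# With equal weights this just minimizes the body count; change any weight
# and the "ethics" of the whole system changes with it.
print(least_harm(groups).name)
```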

Would it be so bad for the vehicle not to derive a value at all? If all are of immeasurable value, then random selection seems to be the least bad answer. If one group should arguably be the victim (for example, a group of terrorists), could that value be determined in enough time? What if Mother Teresa is in the middle of that group of terrorists attempting to convert them? Moreover, what if the person building the learning models and AI for that vehicle is sympathetic to terrorists? Things get sticky, quickly.
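And if the vehicle refuses to rank at all, the fallback I’m describing is about as simple as it gets. Again, a hypothetical sketch, not anyone’s actual policy:

```python
import random


def pick_without_ranking(groups: list[str]) -> str:
    """Treat every group as having immeasurable (and therefore incomparable)
    value and choose uniformly at random instead of pretending to score them."""
    return random.choice(groups)


print(pick_without_ranking(["school children", "family", "veterans"]))
```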

Even in AI/programmatic scenarios there are biases and prior assumptions. I’m not so sure there is a right answer.