(Originally published on LinkedIn in June 2024)
Ah, the famous “trolley problem” of moral philosophy! It goes like this: a trolley on rail tracks will kill a person on the tracks ahead. You are standing next to a switch that can divert the trolley onto another track, but there it will kill 5 people. Will you choose to kill 1 person or 5? Most people choose to kill 1 as opposed to 5. Then the asker makes it more complicated: “What if that 1 person is a child but the 5 people are old and frail?” or “1 saint against 5 evil men?” or other variants.

Such philosophical questions don’t have an answer and never will, because human morality is not deterministic. “It depends!” we mostly answer when asked theoretically. In practice, however, we ourselves don’t know how we would behave. Sometimes we would save that 1 person because he or she is our kin. Sometimes we would save the other 5. Sometimes we would try to save all and fail. And more often than not, we would completely freeze in the moment, unable to do anything.
“But machines are deterministic!” they say, and hence the ongoing debate around the trolley problem and autonomous cars. “Will you design your car to kill 1 or 5?” they ask. Engineers like me have said over and over: “I will design the car to do everything possible until the very last moment to kill NOBODY!” Yet the trolley arguments refuse to die. Yes, it is quite possible that, just like the human who tries to save everyone but fails, the autonomous car also ends up killing all 6 (and possibly its occupants too), but what other choice do engineers have? We can’t design cars with a deterministic rule to prefer 5 lives over 1, or to judge a young life as more valuable than an old one. We can only design the car to keep everyone safe. And if it fails even once in a billion kilometers, the best we can do is use the incident data to further improve the autonomous systems and algorithms.
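To make the engineering stance concrete, here is a minimal, purely hypothetical sketch (all names and numbers are invented for illustration): instead of encoding any rule that ranks lives, a planner can simply search for the maneuver with the lowest total expected harm, counting every person equally — ideally finding one that harms nobody at all.

```python
# Hypothetical sketch of a harm-minimizing maneuver choice. No person is
# weighted by age, number, or moral worth; the planner just prefers whatever
# option (hard braking, steering clear) puts the fewest people at risk.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    collision_probabilities: list[float]  # per-person probability of harm

def expected_harm(m: Maneuver) -> float:
    # Every person counts the same: total expected harm is a plain sum.
    return sum(m.collision_probabilities)

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    # Pick the option that minimizes total expected harm.
    return min(options, key=expected_harm)

options = [
    Maneuver("brake_hard", [0.05, 0.05]),   # small risk to two people
    Maneuver("swerve_left", [0.9]),         # near-certain harm to one person
    Maneuver("continue", [0.0, 0.8, 0.8]),  # high risk to two of three people
]
print(choose_maneuver(options).name)  # → brake_hard
```

The point of the sketch is what it leaves out: there is no field for a person’s age or any other attribute to rank lives by — only the goal of keeping everyone as safe as possible.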
The funny thing is that amid all this talk of the ethics and morality of autonomous cars, nobody talks about the asocial and antisocial things that humans regularly do on the road: not helping an injured person lying on the side of the road, racing on public roads in a fit of narcissism and causing accidents, littering on the streets, road rage, and so on. When I got my driver’s license in Germany, I was shown a video of a person lying on the side of the road needing help while other vehicles simply drove on. “How many people do you think stopped to help this poor person?” asked the instructor, a fireman and first-aid tutor. The answer: just one person (a woman) in over an hour of filming. I was shocked and saddened to learn of this human cruelty, or rather this lack of kindness.
Autonomous cars will not be so heartless, if they are designed properly, that is. With today’s connectivity technologies, cars can already place emergency calls automatically when they are involved in an accident, and they can report a crash involving other vehicles. Of course, they could also be designed to pull over when they detect a person in need (though it would be neither easy to technically define what “need” means nor to implement the detection), but they still won’t be able to force their human occupants to help that person on the side of the road. Some problems are “human,” not technical.
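The automatic emergency call described above can be sketched as a simple trigger rule. This is a hypothetical illustration in the spirit of systems like the EU’s eCall, not an actual specification — the threshold and function names are assumptions:

```python
# Hypothetical sketch of an automatic emergency-call trigger: call for help
# when the car detects its own crash or witnesses another vehicle's crash.
# The deceleration threshold is an invented, illustrative value.

CRASH_DECELERATION = 40.0  # m/s^2 -- assumed "severe impact" threshold

def should_place_emergency_call(peak_deceleration: float,
                                airbag_deployed: bool,
                                crash_observed_nearby: bool) -> bool:
    """True if emergency services should be called automatically."""
    own_crash = airbag_deployed or peak_deceleration >= CRASH_DECELERATION
    return own_crash or crash_observed_nearby

# Normal driving triggers nothing; witnessing a crash does.
print(should_place_emergency_call(3.0, False, False))  # False
print(should_place_emergency_call(3.0, False, True))   # True
```

Note what the rule cannot express: it can summon help, but there is no boolean that makes the occupants get out of the car and render first aid themselves.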