Where does philosophy intersect with self-driving cars?

Have you ever heard of the “trolley dilemma”? It’s a really annoying little philosophical conundrum. Basically, there are two trolley tracks, one with five people tied to it and one with one person tied to it. The trolley is barreling down the first one at high speed, and if left alone, it’ll run the five people over. If you were to pull a nearby lever, though, the track would change, sparing the five people but dooming the one person. Conundrums like this annoy me because they’re really pessimistic. What’s even more annoying, though, is that some folks have used this dilemma to warn against self-driving cars.


The line of thought, I assume, is that if a self-driving car were barreling down a road toward a group of trapped people, the only way to save them would be for the on-board AI to swerve into oncoming traffic, so someone would get hurt either way. Now, I’ll be the first to admit that the on-board AI for self-driving cars is nowhere near commercially viable yet, but in a situation like that, wouldn’t the car just, y’know… stop? Then no one gets hurt. I’m pretty sure an AI could handle that. Yeah, you could argue something like “what if the brakes were cut?” But in that situation, a human driver would be just as likely to hurt someone as an AI driver.
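
For the sake of argument, here’s a toy sketch of the decision rule I’m picturing. Everything in it is made up for illustration (the function name, the numbers, the whole idea of boiling this down to one function), and real autonomous-driving software is vastly more complicated, but the point stands: if the car can stop in time, it stops.

```python
# Toy sketch only: a hypothetical "what would the car do?" rule,
# not how any real autonomous-driving stack actually works.

def choose_action(distance_to_obstacle_m: float,
                  speed_mps: float,
                  max_braking_mps2: float = 8.0,
                  brakes_working: bool = True) -> str:
    """Pick the least harmful action for a single obstacle ahead."""
    if not brakes_working:
        # With no brakes, an AI driver is in the same bind a human would be.
        return "steer toward the least-occupied escape path"

    # Distance needed to stop from the current speed: v^2 / (2 * a)
    stopping_distance_m = speed_mps ** 2 / (2 * max_braking_mps2)

    if stopping_distance_m <= distance_to_obstacle_m:
        return "brake to a full stop"  # nobody gets hurt
    return "brake hard and steer toward the emptiest path"


if __name__ == "__main__":
    # 50 km/h is about 13.9 m/s; with an obstacle 30 m ahead,
    # there's plenty of room to simply stop.
    print(choose_action(distance_to_obstacle_m=30.0, speed_mps=13.9))
```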


Some folks seem to have this notion that in a crunch moment, an AI driver would make an unethical yet logical decision. But I have to ask: what exactly is logical about barreling into innocent bystanders? The most logical decision would be to stop! There may well be some genuinely problematic decision a self-driving car could be forced to make, but I can’t think of one off the top of my head. There are plenty of problems that still need to be worked out with self-driving cars, but let’s not let ourselves be derailed by a decades-old thought experiment.