Danoff
OK, I've got a two-part scenario:
A self-driving car with five passengers is driving along and its brakes fail just before it needs to begin braking to avoid crashing into the barrier ahead. The SDcar looks ahead to the other side of the road and sees that the "walk" light is on, but there is no one using the crosswalk.
Q1: should the SDcar veer over to the wrong side of the road?
You're asking whether the SDcar can violate traffic laws to protect passengers. Like going over the posted speed limit to avoid being t-boned at an intersection, or swerving into the oncoming lane to avoid a deer, etc.
Yes. Traffic laws exist for safety; adhering to them at the expense of safety makes zero sense. They're also pre-crimes, where no injured party need be present for you to have committed a crime. The same is true of any driver: traffic laws are breakable for safety purposes so long as safety can be assessed. Usually the defensive driving technique is NOT to swerve into the other lane to avoid the obstacle, because you can't properly assess whether that maneuver is safe in time to make it, so you don't swerve out of concern that you'd cause a worse accident. In the case of a perfect SDcar, it can assess everything and anything it needs to in order to determine that the maneuver is safe.
Morally, there is absolutely zero problem with violating traffic laws by moving out of your lane without signaling for the proper duration prior to the maneuver, or crossing a double yellow on an abandoned street to avoid a moose, or any number of safety-law violations in the name of safety.
Let's say that the SDcar decides to veer over to the other side of the road because there is no one in the crosswalk. Just after the SDcar veers, a kid chasing a ball begins running across the crosswalk.
Q2: should the SDcar take no additional action and run over the kid, or should the SDcar veer back to the original side of the road and crash into the barrier?
This is no different than the kid chasing a ball running out into the road, except that the only safe maneuver is to crash. The chosen course (chosen by the SDcar) suddenly has a kid in the way. The car can presumably only choose between hitting the kid and hitting a barrier.
Here's the point, morally. Once the SDcar is programmed to cause an accident in response to some circumstance (rather than simply fail to prevent one), the programming takes moral responsibility for the outcome of that accident. If the car knows for sure that the impact with the barrier is 100% going to kill everyone in the car, and it decides to go for the barrier, the programmers have basically murdered the people in the car. They instructed the car to take an action that they knew would kill the passengers. Forget the reason (saving the kid); that's murder of innocent people. Now let's assume that the SDcar can assess, to within 99.999999% probability, that the impact with the barrier would not kill anyone. Suddenly, swerving into the barrier is not murder, it's property damage (which Google might comp you in favor of saving the kid's life).
Any time you get into a SDcar you assume some risk. It is possible, though counterintuitive, that hitting a barrier can represent a lower level of risk to your life than the ambient risk that you assume while driving. Let's say the SDcar determines that it can reduce the impact speed to 15 mph, and that since everyone is buckled, no one in the history of humanity who had their seat belt buckled in a car with airbags has ever been killed at a speed that low. That's probably a lot of accidents, with zero fatalities. The assessed risk on that impact can be lower than the ambient risk of driving through the next intersection, which could have an unseen semi-truck ready to t-bone you and kill everyone inside. The risk of that happening can be higher than the risk of dying or even being injured in a known impact. So the SDcar may be improving your likelihood of survival by crashing you into a barricade vs. driving you through a green light.
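To make that comparison concrete, here's a rough sketch; every number in it is an invented, illustrative assumption rather than a real crash statistic:

```python
# Back-of-the-envelope sketch of the risk comparison above.
# All probabilities are invented for illustration, not real data.

# Assumed fatality risk for a belted passenger (with airbags) in a
# controlled 15 mph barrier impact.
risk_known_impact = 1e-8

# Assumed ambient fatality risk of continuing the trip through the next
# intersection (unseen semi-truck, etc.).
risk_ambient_driving = 1e-6

# Under these assumed numbers, the deliberate low-speed crash is the
# lower-risk option for the passengers.
if risk_known_impact < risk_ambient_driving:
    print("Crashing into the barrier is safer than continuing to drive.")
```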
No reasonable person would refuse to accept a 0.000001% chance of death in exchange for preventing a kid from having a 99.9% chance of death. However, morality can't be based only on what is reasonable.
Morally you can't assume that the person will take on any risk that they haven't told you they're willing to take on. So if the car can reduce your chance of death below ambient by crashing (which I outlined above), it can still swerve to avoid the kid (morally), and someone owes you a car and compensation for the hour of your time that it took before an Uber got to you. In that circumstance, you agreed to that level of risk before you set foot in the car. In fact, the car can use the risk for the entire trip as a gauge of how much risk the rider has agreed to take. But the car can't autonomously assign you additional risk to save the kid's life, because it doesn't know how much risk you're willing to take.
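One way to picture that constraint, as a sketch only (the function and the numbers are mine, not anything a real SDcar runs):

```python
def within_agreed_risk(maneuver_risk: float, trip_risk: float) -> bool:
    """The car may only choose maneuvers whose risk to the passengers stays
    at or below the risk they already accepted by taking the trip at all;
    it may not unilaterally assign them extra risk to save a third party."""
    return maneuver_risk <= trip_risk

# Illustrative, assumed numbers (same spirit as the sketch above).
trip_risk = 1e-6            # ambient risk the rider accepted for the whole trip
barrier_impact_risk = 1e-8  # assessed risk of the controlled 15 mph impact

# The barrier impact stays within the agreed risk, so swerving is permitted.
print(within_agreed_risk(barrier_impact_risk, trip_risk))  # True
```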
On the other hand...
It should ask you. When you get in the car it should ask you how much risk you're willing to take to save the lives of those around you and act accordingly. There's no reason that the car couldn't say "by pressing start, you agree to an X percent mortality rate for the following trip", or "by pressing start, you agree to take on an additional X percent mortality rate to take precautions to protect others". Or you could even be presented with a dial that goes from suicidal on one side (my life is worth nothing, kill me first always) to narcissistic on the other side (I will take no risk for others).
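A sketch of what that consent step might look like; the dial scale, the risk cap, and the mapping are all my own illustrative assumptions:

```python
def accepted_extra_risk(dial: float, max_extra_risk: float = 1e-4) -> float:
    """Map the rider's dial setting to the additional mortality risk they accept.
    dial = 0.0 -> narcissistic: accept no added risk to protect others.
    dial = 1.0 -> suicidal: accept the maximum the car will offer.
    The linear mapping and the cap are illustrative assumptions only."""
    return dial * max_extra_risk

def may_swerve_for_others(added_risk: float, dial: float) -> bool:
    # The car only takes on risk the rider explicitly agreed to at trip start.
    return added_risk <= accepted_extra_risk(dial)

# Example: a rider who set the dial to 0.5 accepts up to 5e-5 of extra risk,
# so a maneuver adding 1e-6 of risk to avoid the kid is permitted.
print(may_swerve_for_others(added_risk=1e-6, dial=0.5))  # True
```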