When Casualties are Inevitable, Who Should Self-Driving Cars Save?

  • Thread starter Eh Team
  • 145 comments
  • 7,105 views
OK, I've got a two-part scenario:

A self-driving car with five passengers is driving along and its brakes fail just before it needs to begin braking to avoid crashing into the barrier ahead. The SDcar looks ahead to the other side of the road and sees that the "walk" light is on, but there is no one using the crosswalk.

Q1: should the SDcar veer over to the wrong side of the road?

You're asking whether the SD car can violate traffic laws to protect passengers. Like going over the posted speed limit to avoid being t-boned at an intersection, or swerving into the oncoming lane to avoid a deer, etc.

Yes. Traffic laws exist for safety; adhering to them at the expense of safety makes zero sense. They're also pre-crimes, where no injured party need be present for you to have committed a crime. The same is true of any driver. Traffic laws are breakable for safety purposes so long as safety can be assessed. Usually the defensive driving technique is NOT to swerve into the other lane to avoid the obstacle, because you can't properly assess whether that maneuver is safe in time to make it, so you don't swerve out of concern that you'd be causing a worse accident. In the case of a perfect SDcar, it can assess anything and everything it needs to in order to determine that the maneuver is safe.

Morally, there is absolutely zero problem with violating traffic laws by moving out of your lane without signaling for the proper duration prior to the maneuver, or crossing a double yellow on an abandoned street to avoid a moose, or any number of safety law violations in the name of safety.

Let's say that the SDcar decides to veer over to the other side of the road because there is no one in the crosswalk. Just after the SDcar veers, a kid chasing a ball begins running across the crosswalk.

Q2: should the SDcar take no additional action and run over the kid, or should the SDcar veer back to the original side of the road and crash into the barrier?

This is no different than the kid chasing a ball running out into the road, except that the only safe maneuver is to crash. The chosen course (chosen by the SDcar) suddenly has a kid in the way. The car can presumably only choose between hitting the kid and hitting a barrier.

Here's the point, morally. Once the SDcar is programmed to cause an accident in response to some circumstance (rather than simply fail to prevent one), the programming then takes moral responsibility for the outcome of that accident. If the car knows for sure that the impact with the barrier is 100% going to kill everyone in the car, and it decides to go for the barrier, the programmers have basically murdered the people in the car. They instructed the car to take an action that they knew would kill the passengers. Forget the reason (saving the kid), that's murder of innocent people. Now let's assume that the SDcar can assess that the impact with the barrier would not kill anyone to within 99.999999% probability. Suddenly, swerving into the barrier is not murder, it's property damage (which Google might comp you in favor of saving the kid's life).

Any time you get into an SDcar you assume some risk. It is possible, though counterintuitive, that hitting a barrier can represent a lower level of risk to your life than the ambient risk that you assume while driving. Let's say the SDcar determines that it can reduce the impact speed to 15 mph, and that since everyone is buckled, no one in the history of humanity who had their seat belt buckled in a car with airbags has ever been killed at a speed that low. That's probably a lot of accidents, with zero fatalities. The assessed risk on that impact can be lower than the ambient risk of driving through the next intersection, which could have an unseen semi-truck ready to t-bone you and kill everyone inside. The risk of that happening can be higher than the risk of dying or even being injured in a known impact. So the SDcar may be improving your likelihood of survival by crashing you into a barricade vs. driving you through a green light.
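
To put rough numbers on that (made-up figures, purely to show the comparison the car would be making between a controlled impact and the rest of the trip):

# Hypothetical probabilities for illustration only: real values would come
# from crash statistics and the car's own risk model.
p_fatal_barrier_15mph = 1e-7   # belted, airbags, controlled 15 mph impact
p_fatal_ambient_route = 1e-6   # rest of the trip: intersections, other drivers, etc.

if p_fatal_barrier_15mph < p_fatal_ambient_route:
    print("The controlled barrier impact is the lower-risk option for the passengers.")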

No reasonable person would refuse to accept a 0.000001% chance of death in exchange for preventing a kid from having a 99.9% chance of death. However, morality can't be based only on what is reasonable.

Morally you can't assume that the person will take on any risk that they haven't told you they're willing to take on. So if the car can reduce your chance of death below ambient by crashing (which I outlined above), it can still swerve to avoid the kid (morally), and someone owes you a car and compensation for the hour of your time that it took before an Uber got to you. In that circumstance, you agreed to that level of risk before you set foot in the car. In fact, the car can use the risk for the entire trip as a gauge of how much risk the rider has agreed to take. But the car can't autonomously assign you additional risk to save the kid's life, because it doesn't know how much risk you're willing to take.

On the other hand,

It should ask you. When you get in the car, it should ask you how much risk you're willing to take to save the lives of those around you and act accordingly. There's no reason that the car couldn't say "by pressing start, you agree to an X percent mortality rate for the following trip", or "by pressing start, you agree to take on an additional X percent mortality rate to take precautions to protect others". Or you can even be presented with a dial that goes from suicidal on one side (my life is worth nothing, kill me first always) to narcissistic on the other side (I will take no risk for others).
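
To make the dial concrete, here's a minimal Python sketch of how an agreed risk budget could gate the car's choices. Everything here is hypothetical: the function, the candidate maneuvers and the numbers are illustrations, not anyone's actual implementation.

# Hypothetical sketch: the rider agrees to an added-risk budget up front, and
# the car only considers maneuvers that stay inside it.

def choose_maneuver(candidates, agreed_added_risk):
    """candidates: list of (name, added_passenger_risk, expected_harm_to_others)."""
    # Keep only maneuvers whose extra risk to the passengers is within what
    # the rider consented to when they pressed "start".
    allowed = [c for c in candidates if c[1] <= agreed_added_risk]
    if not allowed:
        return None  # nothing acceptable: stay on course and brake as hard as possible
    # Among the allowed maneuvers, minimize expected harm to people outside the car.
    return min(allowed, key=lambda c: c[2])

# Example: the rider's dial accepts at most a 0.0001% (1e-6) added mortality risk.
options = [
    ("stay on course (hit the kid)",  0.0,  0.999),
    ("swerve into barrier at 15 mph", 1e-6, 0.0),
]
print(choose_maneuver(options, agreed_added_risk=1e-6))

With the dial set to zero, the same function would refuse the swerve and leave only braking on the chosen course, which is exactly the point of asking first.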
 
The day that self-driving cars become commonplace is the day I apply for my helicopter pilot's license. I am not sharing the road with imperfect machines that are highly accident-prone. And trust me, nobody will ever find a way to perfect self-driving cars.

Humans are error prone but automated automobiles are bound to screw up in situations where a human normally won't, and that means in situations with more potential casualties.

Let's say that a sensor goes bad while the car is going 30 mph down a city boulevard, toward a red light at a crosswalk. It's 5:00 PM and many people are crossing the streets. You can kiss a few innocent lives goodbye in that case.
 
Let's say that a sensor goes bad while the car is going 30 mph down a city boulevard, toward a red light at a crosswalk. It's 5:00 PM and many people are crossing the streets. You can kiss a few innocent lives goodbye in that case.

Sounds like a human driver.
 
Humans are error prone but automated automobiles are bound to screw up in situations where a human normally won't, and that means in situations with more potential casualties.

Like a human not stopping at a 4-way stop sign and T-boning someone
Like a human using a phone whilst driving and hitting someone
Like a human driving drunk
Like a human driving unlicensed
Like a human driving a defective car

Let's say that a sensor goes bad while the car is going 30 mph down a city boulevard, toward a red light at a crosswalk. It's 5:00 PM and many people are crossing the streets. You can kiss a few innocent lives goodbye in that case.

In the event of a sensor failure, a failsafe would kick in, like a backup sensor, or the car would come to a stop and then hand control to the human driver.
 
Let's say that a sensor goes bad while the car is going 30 mph down a city boulevard, toward a red light at a crosswalk.

In other words: "Oh My God, the picture Jenny tweeted was so hilarious! Bri, you HAVE GOT to look at this!"

-

There's a reason genuine sudden unintended acceleration is so rare. Because software has internal self-consistency checks and sensor checks to make sure that everything is hunky dory. This is also why your engine does not blow up the minute a camshaft or crankshaft position sensor goes bad. And why it takes a long time for a vacuum leak or a mis-reading MAF sensor to do any damage to an Engine (O2 sensor and MAP sensor cross-checking and back-up)

And what does the Engine do when a sensor is not returning any information to it?

That's right. It stops. Dead. A self-driving car should do the same.
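
As a rough sketch of that behaviour (hypothetical sensor names and thresholds in Python, not any vendor's actual code), the driving loop could treat a silent critical sensor the way an ECM treats a missing crank signal:

import time

# If a critical sensor stops reporting, execute a controlled stop rather than
# guess, just as an ECM shuts the engine down when the crank sensor goes silent.

CRITICAL_SENSORS = ("lidar", "front_radar", "wheel_speed")
MAX_SILENCE_S = 0.2  # how long a critical sensor may go quiet before we act

def silent_sensors(last_seen, now):
    """Return the critical sensors that have not reported recently."""
    return [s for s in CRITICAL_SENSORS if now - last_seen.get(s, 0.0) > MAX_SILENCE_S]

def control_step(last_seen):
    dead = silent_sensors(last_seen, time.monotonic())
    if dead:
        return "controlled stop: lost " + ", ".join(dead)
    return "continue normal driving"

last_seen = {s: time.monotonic() for s in CRITICAL_SENSORS}
print(control_step(last_seen))   # continue normal driving
last_seen["lidar"] -= 1.0        # simulate the lidar going silent for a second
print(control_step(last_seen))   # controlled stop: lost lidar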

-

There will, eventually, be cases where people are killed in self-driving cars. It has, in fact, happened already, but that's because of people mucking about with it in unintended ways. But there are people already being killed every day by morons behind the wheel. On a much greater scale. On balance, I don't think it's a change for the worse.

-

Possibly the best policy is to design a battery of tests purposely designed to confuse onboard sensors, to determine the fitness of each self-driving system, its sensors and algorithms. And to limit self-driving to certain areas and situations while more data is gathered. But Pandora's Box is open now, and whether we like it or not, they're here, and we'll be driving alongside them in the decades to come.
 
The day that self-driving cars become commonplace is the day I apply for my helicopter pilot's license. I am not sharing the road with imperfect machines that are highly accident-prone. And trust me, nobody will ever find a way to perfect self-driving cars.

You already share the road with humans. Self-driving cars will be less prone to error than any human ever could be, since computers can't get distracted or have emotions on the road.

I will never understand the hate or fear of self-driving cars. People who have no interest in driving or paying attention while driving will flock to them, and it will be safer for everyone. Even if self-driving cars decrease on-road incidents by 5%, it'll be a win.
 
This is also why your engine does not blow up the minute a camshaft or crankshaft position sensor goes bad.
Nope, it just plain stops, because when the crankshaft sensor does not communicate with the ECM, it has no idea when to fire or open the injectors, so it does nothing, just like turning the key to run without starting the engine.

And why it takes a long time for a vacuum leak or a mis-reading MAF sensor to do any damage to an Engine (O2 sensor and MAP sensor cross-checking and back-up)
The default settings, aka "limp mode", are not the optimal settings for all driving conditions.
Running too rich over time will wear the bearings and valves, having to potentially compress a liquid. The excessive heat from running lean can do countless things to an engine over time.

And what does the Engine do when a sensor is not returning any information to it?

That's right. It stops. Dead. A self-driving car should do the same.
I beg to differ. There are a few, and I mean a few, codes out of the potential 1,000+ codes in modern cars that will actually stop it from starting or make it suddenly stop. And by that I mean "open circuit" codes. And the most common is the crankshaft sensor, as mentioned above.
I worked on a car that had 43 codes for 9 sensors. It ran like crap, but it ran.
My work van has codes for 2 emission sensors (EVAP and secondary air induction). It runs fine, sucks a lot of gas and knocks when the light comes on though. Bought a $50 code scanner and just delete the codes; voilà, back to normal.

Yes, I know it won't fix it or help anything, but all I want to do is finish this year. My business has been good this year even though my van has netted me some huge losses; she is 20 years old and I can finally afford a brand new one, on a prayer that this van can finish my contracts for the year. *knocks on wood*

Anyways, I don't think having self-driving cars on the road is a good idea. Now before you quote this sentence only...
While we have idiots that can't get their phone out of their face for the amount of time needed to go from point A to point B, and the vehicles can't communicate with each other, we will continue to have accidents.
Now if all vehicles were SD and communicated with each other, it could also cut down on pedestrian accidents.
A perfect example is the poor guy that gets hit changing a flat tire on the side of the road. If the cars communicated, an alert about the incident could bring the surrounding vehicles to a stop or a slow speed for a coordinated avoidance. Hell, they don't even have to be SD; they could implement it now with the new drive-by-wire technology.
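
Just to sketch what that broadcast could look like (a toy Python sketch, not a real V2V protocol; the message fields, ranges and coordinates are all invented for illustration):

from dataclasses import dataclass
import math

@dataclass
class IncidentAlert:
    lat: float
    lon: float
    kind: str = "disabled vehicle on shoulder"

def distance_m(a, b):
    # Crude flat-earth approximation; good enough for a few hundred metres.
    return math.hypot((a[0] - b[0]) * 111_000, (a[1] - b[1]) * 111_000)

def on_alert(alert, my_position, my_speed_kmh):
    """What a receiving car might do with the broadcast."""
    if distance_m((alert.lat, alert.lon), my_position) < 300:
        return min(my_speed_kmh, 40), "move to the far lane if clear"
    return my_speed_kmh, "no action"

print(on_alert(IncidentAlert(43.6532, -79.3832), (43.6540, -79.3830), 60))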
 
Nope, it just plain stops, because when the crankshaft sensor does not communicate with the ECM, it has no idea when to fire or open the injectors, so it does nothing, just like turning the key to run without starting the engine.

The default settings, aka "limp mode", are not the optimal settings for all driving conditions.
Running too rich over time will wear the bearings and valves, having to potentially compress a liquid. The excessive heat from running lean can do countless things to an engine over time.

When the software does not know what to do, it does what it knows is reasonably safe. In the case of the crank or cam sensors, it knows that it isn't safe to continue, as it can't guess to any degree of accuracy when to squirt fuel or when to fire a spark, so it stops. If it's a dead MAF, or an EVAP or O2 problem, it knows that it's reasonably safe to follow a conservatively rich baseline map, so it does that. And it'll keep running until the engine blows or you fix it. At that point, it is not the fault of the software if you ignore the codes!

Modern cars are more sensitive. Some won't even run if they can't ping certain body modules or sub-components. Makes swapping engines a pain. And brings new meaning to "the blue screen of death".

-

The point being: when sensors or hardware fail, properly written software should compensate. Granted, you have that one-in-a-million piece of code that's improperly written and fails to detect a bit flip or an erroneously constant signal, but most control software is more robust than that.
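
A small sketch of that kind of cross-check in Python (hypothetical sensor values and thresholds): when two independent readings disagree, fall back to a conservative default instead of trusting either one, in the same spirit as the MAF-vs-MAP plausibility checks above.

CONSERVATIVE_DEFAULT = 55.0  # a safe "limp mode" value used when readings can't be trusted

def fused_reading(primary, secondary, max_disagreement=10.0):
    """Return a usable value, degrading gracefully when sensors disagree or die."""
    if primary is None and secondary is None:
        return None                      # nothing to go on: caller must stop safely
    if primary is None:
        return secondary                 # limp along on the backup sensor
    if secondary is None:
        return primary
    if abs(primary - secondary) > max_disagreement:
        return CONSERVATIVE_DEFAULT      # implausible pair: use the safe default
    return (primary + secondary) / 2.0   # both plausible: fuse them

print(fused_reading(100.0, 40.0))  # disagree -> conservative default
print(fused_reading(None, 62.0))   # one dead -> run on the other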
 
Let's say that a sensor goes bad while the car is going 30 mph down a city boulevard, toward a red light at a crosswalk. It's 5:00 PM and many people are crossing the streets. You can kiss a few innocent lives goodbye in that case.

There has to be a particular presumption made for that to be considered likely... that automated cars will each be individually reacting to all events as they approach pedestrian crossings.

That wouldn't be a very clever system - we're far more likely to see that the pedestrian crossing has broadcast its status (and other properties) to the network and that the automated cars will be preparing to slow/stop on approach. As other users have noted (particularly @niky) single-sensor failures don't have to be fatal to the machine or the local operating area and the fail-safe is a complete shutdown.
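
For illustration, here's a minimal Python sketch of the receiving side of such a broadcast. The message states and the speed rule are invented; a real deployment would use a standard such as SPaT signal-phase messages rather than anything like this.

def approach_speed_kmh(crossing_state, distance_m, cruise_kmh=50):
    """Pick an approach speed from the crossing's broadcast state."""
    if crossing_state == "clear":
        return cruise_kmh
    if crossing_state in ("walk", "pedestrian_detected"):
        # Prepare to stop well before the crossing.
        return min(cruise_kmh, max(10, distance_m // 4))
    # Unknown state or no broadcast at all: treat it like a sensor failure and crawl.
    return 20

print(approach_speed_kmh("walk", 80))   # -> 20
print(approach_speed_kmh("clear", 80))  # -> 50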

Modern cars are more sensitive. Some won't even run if they can't ping certain body modules or sub-components. Makes swapping engines a pain. And brings new meaning to "the blue screen of death".

Makes me think of the old "if Microsoft/Apple made cars" jokes... you'd have a crash every ten minutes and just start again.
 
Morally you can't assume that the person will take on any risk that they haven't told you they're willing to take on. So if the car can reduce your chance of death below ambient by crashing (which I outlined above), it can still swerve to avoid the kid (morally), and someone owes you a car and compensation for the hour of your time that it took before an Uber got to you. In that circumstance, you agreed to that level of risk before you set foot in the car. In fact, the car can use the risk for the entire trip as a gauge of how much risk the rider has agreed to take. But the car can't autonomously assign you additional risk to save the kid's life, because it doesn't know how much risk you're willing to take.

On the other hand,

It should ask you. When you get in the car, it should ask you how much risk you're willing to take to save the lives of those around you and act accordingly. There's no reason that the car couldn't say "by pressing start, you agree to an X percent mortality rate for the following trip", or "by pressing start, you agree to take on an additional X percent mortality rate to take precautions to protect others". Or you can even be presented with a dial that goes from suicidal on one side (my life is worth nothing, kill me first always) to narcissistic on the other side (I will take no risk for others).


Just to dovetail one more thought onto this.

If you tell a SD car that it can't increase your risk above what you've prescribed for the route, or what you've agreed that the route can be, it might not change routes for you. It can assess that one route has a higher mortality rate than another, and stick to your traffic-clogged route over taking a much faster but higher-mortality route. Imagine for a moment that the SD car evaluates a tiny mortality rate for surface streets where the speed limit never goes over 45 mph, and a higher mortality rate for the freeway where the speed limit hits 65. If you've agreed to the surface streets, it may not put you on the freeway because it's programmed not to increase your risk. In fact, that might even be your intention. You might be deathly afraid of driving and insist on surface streets.

This highlights the overall need for some statistical space to play in. The passenger has to agree to operate within a certain mortality statistic, and give the car some room to play within those confines. Otherwise it may simply assess that the safest thing to do is not to take you to your destination, ever.
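
As a sketch of that constraint (route names and risk numbers are hypothetical), the planner would pick the fastest route whose estimated trip risk stays inside the agreed budget, and decline the trip if nothing does:

def pick_route(routes, agreed_trip_risk):
    """routes: list of (name, travel_minutes, estimated_mortality_risk)."""
    acceptable = [r for r in routes if r[2] <= agreed_trip_risk]
    if not acceptable:
        return None  # no route fits the budget; the car declines to drive
    return min(acceptable, key=lambda r: r[1])

routes = [
    ("surface streets", 42, 1e-7),
    ("freeway",         25, 5e-7),
]
print(pick_route(routes, agreed_trip_risk=2e-7))  # -> surface streets
print(pick_route(routes, agreed_trip_risk=1e-6))  # -> freeway

The "statistical space to play in" is just the gap between the safest possible route and the budget the rider agreed to; with a budget of zero, pick_route returns None and you never leave the driveway.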
 
Lost commute time? I don't even know what that means. You're so important right now you need to stare at your phone? Your drive to work or home must stink. That's a freedom and a privilege. Try walking, biking or taking a bus.

Can I assume you stare at your phone while you drive like maybe 75% of the people I see?

Those people need to get a life in my opinion.
Pretty judgemental of people who have different occupations and lifestyles than you.

I drive an hour to and from work, every day. That's two hours of driving time that I could spend doing other things... such as checking email, doing my taxes, doing other bookkeeping, etc.

I also run my own business, and spend anywhere from an hour to 3 hours on the phone with clients some days.

When I'm on a job site, I need to get work done... time on the phone is wasted time. I could continue to do the extra work in the evenings and on weekends, but I don't exactly enjoy doing bookkeeping. I would much rather spend that time doing things I enjoy.

So, if I had that 10hrs a week I spend driving to do other work related things, I'd have much more free time in the evenings and weekends.

But I guess I'm lazy, so I don't deserve the extra free time, or something. I could probably solve my problems if I loaded a work truck's worth of tools on my back and walked to work.
 
A self driving truck would do wonders for me as well. When you're in the emergency service business, minutes count. Although most of my business is repeat or referral, I sometimes can't return a new customer enquiry quickly enough because I'm driving and don't want to touch my phone and by the time I pull over and make the call, someone else has called the customer back before me and I've lost business. The ability to do something else while the vehicle is moving would increase my business and shorten my work week.
 
A self driving truck would do wonders for me as well. When you're in the emergency service business, minutes count. Although most of my business is repeat or referral, I sometimes can't return a new customer enquiry quickly enough because I'm driving and don't want to touch my phone and by the time I pull over and make the call, someone else has called the customer back before me and I've lost business. The ability to do something else while the vehicle is moving would increase my business and shorten my work week.
Have you tried a Bluetooth headset?
Works fine unless I get too far from my truck, where my phone usually is.
Skullcandy makes a good set, reasonably priced too.
 
Have you tried a Bluetooth headset?
Works fine unless I get too far from my truck, where my phone usually is.
Skullcandy makes a good set, reasonably priced too.
I have a Bluetooth headset, but I drive around in an older Canadian city of 200,000 with a lot of fairly tight two-lane roads. A lot of tight intersections, lots of parking in the curb lanes, etc. You have to be on your toes constantly here, watching for other drivers not paying attention. I can't dial the phone or read through service call emails while I'm driving, so I have to pull over, which slows me down. I do use the headset to talk, but I'm leery of doing so in heavy, tight traffic.
 
The day that self-driving cars become commonplace is the day I apply for my helicopter pilot's license. I am not sharing the road with imperfect machines that are highly accident-prone. And trust me, nobody will ever find a way to perfect self-driving cars.

Humans are error prone but automated automobiles are bound to screw up in situations where a human normally won't, and that means in situations with more potential casualties.

Let's say that a sensor goes bad while the car is going 30 mph down a city boulevard, toward a red light at a crosswalk. It's 5:00 PM and many people are crossing the streets. You can kiss a few innocent lives goodbye in that case.

 


It goes without saying that driverless cars will need to be networked into the traffic signal grid in the future.

A driverless car must operate from the data available to it. Which means it can be affected by line of sight issues (following a truck) or glare disguising a traffic signal or any other of a dozen things that can prevent a human driver from seeing a stoplight.

A human driver, of course, will learn to read the road and will slow down automatically when they see cars stopped at a crosswalk... just in case... but most don't, anyway.
 
It goes without saying that driverless cars will need to be networked into the traffic signal grid in the future.

A driverless car must operate from the data available to it. Which means it can be affected by line of sight issues (following a truck) or glare disguising a traffic signal or any other of a dozen things that can prevent a human driver from seeing a stoplight.

I think the presumption is definitely that the cars will rely on traffic grid control data (and vice versa). A queued car could conceivably save at least 5 seconds at a traffic signal (if indeed the chances of having to stop are as high). Multiplied over the number of daily car journeys the savings in journey time could be immense. It's hard to imagine control being so easy if the cars rely on "sight" alone.
 
I think the presumption is definitely that the cars will rely on traffic grid control data (and vice versa). A queued car could conceivably save at least 5 seconds at a traffic signal (if indeed the chances of having to stop are as high). Multiplied over the number of daily car journeys the savings in journey time could be immense. It's hard to imagine control being so easy if the cars rely on "sight" alone.

They wouldn't just save time, they'd save a lot of fuel - especially if the light was broadcasting upcoming timing instead of just the current state - and smooth out the ride for passengers as well.
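
A rough Python sketch of the receiving side (the message fields and numbers are invented): given the distance to the light and the broadcast time until it turns green, the car picks a coasting speed that arrives on the green instead of stopping.

def target_speed_kmh(distance_m, seconds_to_green, limit_kmh=50, min_kmh=15):
    if seconds_to_green <= 0:
        return limit_kmh                               # already green: carry on at the limit
    required = (distance_m / seconds_to_green) * 3.6   # m/s -> km/h
    if required > limit_kmh:
        return limit_kmh                               # can't legally make it: will have to stop
    return max(min_kmh, required)                      # coast in and arrive on the green

print(round(target_speed_kmh(distance_m=300, seconds_to_green=30), 1))  # ~36 km/h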
 
Unfortunately, I feel like smart stop lights would be incredibly expensive and be funded at the cost of additional fuel tax.

However, in theory they sound like a fantastic idea.
 
Unfortunately, I feel like smart stop lights would be incredibly expensive and be funded at the cost of additional fuel tax.

However, in theory they sound like a fantastic idea.

A quick Google search suggests that they get overhauled once every 20 years or so (including replacement of controllers, lamps, poles, etc.). That would mean that a 20-light town is doing a traffic light overhaul just about every year.

Google:
Q: How many traffic signals are there in New York City?
A: As of January 2006, there were 11,871 signalized intersections Citywide, including 2,795 in Manhattan, 4,100 in Brooklyn, 2,942 in Queens, 1,536 in the Bronx, and 500 in Staten Island.

If NYC has 12,000 traffic lights, each lasting 20 years, then on average you're replacing about 12 per week.
 
Unfortunately, I feel like smart stop lights would be incredibly expensive and be funded at the cost of additional fuel tax.

However, in theory they sound like a fantastic idea.

The controllers are already "smart" - they react to traffic information around them. Even now on my way home (after lighting-up time) I'm able to turn most of the junctions to green with my headlights before I get there. Adding networking to those control boxes would be relatively easy, and nothing else on the junction needs to change. If anything, you'd be swapping one type of sensing (particularly expensive in the case of under-road copper loops) for another.
 
Even now on my way home (after lighting-up time)

You should probably not drive that way.

 
Unfortunately, I feel like smart stop lights would be incredibly expensive and be funded at the cost of additional fuel tax.

I doubt that adding a short-range broadcasting unit to each light adds a lot of cost. You could basically break a cheap cellphone in half and put that in. Less than $100, definitely.

As for fuel tax, that seems tough if everyone is driving electric cars by that point.
 
I don’t know how they work, but could they modify the sensors they use for emergency vehicles to communicate with autonomous vehicles?
 