When Casualties are Inevitable, Who Should Self-Driving Cars Save?

Disregarding this specific "backup driver" case, who is criminally responsible when an autonomous car runs over or crashes into a pedestrian?

If you're busy doing your work on your commute to the office, or if you're blind drunk being driven home, are you to be prosecuted if you hit someone, if it can be proven that the car is at fault and not the pedestrian?
The law is playing catch-up with technology, and blame is likely to shift as the technology gains acceptance at the legislative level.

At the current level of acceptance, which has companies looking for places that won't prohibit testing, the technology is seen more as an assist; that is to say, something that should not be relied upon and that requires a backup driver. Better technology that gains more acceptance may be seen as something that can be relied upon, and may therefore be deemed responsible in the event of a failure that results in harm and/or damage.

Right now, I suspect all culpability falls on the inattentive driver rather than the flawed technology, because it was misplaced reliance on a flawed technology that resulted in death, rather than the flawed technology itself.


Regarding this specific "backup driver" case, the state of Arizona's lax regulation leading to all this testing might see him off the hook.
Someone wasn't doing their job (and there is proof of this)--a job that exists to ensure some degree of safety--and that negligence resulted in death.

I suspect that this incident taking place amid a lack of regulation will lead to more regulation.
 
Disregarding this specific "backup driver" case, who is criminally responsible when an autonomous car runs over or crashes into a pedestrian?

In the UK the person "in charge" of the vehicle would be subject to a criminal investigation, as is the case in any fatal accident. Simply having the keys in one's pocket can constitute being "in charge" for some purposes. The results of the investigation would then be passed to the CPS if the police felt that there was any criminal action undertaken by the person in charge (misoperation, carelessness, intoxication, etc.).

People are killed crossing roads every day, and it's often because they do so in stupid ways or in stupid places. On the face of it, the Sheriff's account makes it sound as though this accident was sadly unavoidable. I'd therefore expect that the person "in charge" wouldn't be recommended for prosecution.
 
Interesting video. I know that cameras generally don't have the same sensitivity as the human eye at night (and presumably the sensors on the car), but I can't really see her in the video until she steps into the car's lane and the headlights. At that point it's far too late to stop. Slow down maybe, and if she was a ninja she might have been able to dash out of the way. But as long as she keeps calmly strolling, she's getting hit regardless.

More interesting, the victim doesn't have time to look at the car either. Frankly, if she doesn't have time to look, the car doesn't have time to stop. I would have expected the car to make at least some attempt to brake (especially as an autonomous car), but she wasn't making it out of that situation unscathed. She walked out of the dark into the path of a car along a poorly lit section of road where one would not expect a pedestrian to be crossing. The car didn't see her, and she didn't see the car, and if she doesn't notice the brightly lit vehicle then one can hardly expect the car to notice the set of drab clothes strolling through the dark.

There was a case in my town when I was a kid where a guy got drunk one evening and fell asleep in the middle of a fairly lightly used piece of road. Similar sort of poor lighting and limited expectation of people on the road. A car hit him and he died. The driver was not charged, although they were obviously shaken up for months afterwards. In a generic sense, I don't see this as much different. I can see how a driver could very reasonably have not seen or expected her, and not been able to stop once it was obvious she was there. If I'm a train driver, I'm not liable for people committing suicide by train.

Yet interestinger, the "driver" clearly wasn't making any attempt to do his job. Here's where it gets odd. He hasn't made a reasonable attempt to avoid an accident. The accident was possibly largely unavoidable anyway, but that's sort of beside the point. At the very least, he's operating the vehicle recklessly, in my opinion. If he was actually driving and paying attention, I'd say there's a reasonable chance it's ruled a tragic accident without fault. With him sitting playing with his phone, looking up every few seconds, that seems irresponsible.
 
Interesting thread.

Disregarding this specific "backup driver" case, who is criminally responsible when an autonomous car runs over or crashes into a pedestrian?

If you're busy doing your work on your commute to the office, or if you're blind drunk being driven home, are you to be prosecuted if you hit someone, if it can be proven that the car is at fault and not the pedestrian?

I wonder how much insurance companies and solicitors are rubbing their hands with glee...

Regarding this specific "backup driver" case, the state of Arizona's lax regulation leading to all this testing might see him off the hook. Other states will have to make their minds up on new laws one day.

I'm not sure how the laws are in other parts of the world, but I don't believe level 4 autonomy (in which the car can operate without human input) is actually legal in the US. Only level 3 is, which requires that a driver be ready to take over driving duties at any time. So given that, I think until level 4 becomes legal, the driver should still be held accountable when an accident occurs.
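
For reference, here's a rough paraphrase of the SAE automation levels being thrown around (my own quick summary, not the standard's exact wording):

```python
# Rough paraphrase of the SAE J3016 driving-automation levels.
# My summary for reference; the standard's wording is more precise.
SAE_LEVELS = {
    0: "No automation: the human does all the driving",
    1: "Driver assistance: steering OR speed support (e.g. cruise control)",
    2: "Partial automation: steering AND speed support; human monitors at all times",
    3: "Conditional automation: the car drives itself, but a human must take over on request",
    4: "High automation: no human input needed within a defined operating domain",
    5: "Full automation: no human input needed anywhere a person could drive",
}

for level, description in SAE_LEVELS.items():
    print(f"Level {level}: {description}")
```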

 
Interesting video. I know that cameras generally don't have the same sensitivity as the human eye at night (and presumably the sensors on the car), but I can't really see her in the video until she steps into the car's lane and the headlights. At that point it's far too late to stop. Slow down maybe, and if she was a ninja she might have been able to dash out of the way. But as long as she keeps calmly strolling, she's getting hit regardless.

An unsophisticated camera, probably with an incorrect exposure setting, replayed on a limited-dynamic-range screen is not going to convey what a human would have been able to see in this situation. I'm not saying it would have improved visibility dramatically, but I think a human driver paying attention could have prevented this.

The whole point of Lidar is that it's supposed to see irrespective of light conditions. It's literally shooting beams of lasers out at the world and building 3D models of its environment in real time. I see these Volvos driving around quite often, and usually the passenger (if there is one) has a laptop on which you can actually see the point cloud mapping.

Also, where is the redundancy? If the system is being built without several layers of redundancy, there is a serious problem: if Lidar fails, there should be a night-vision optical camera backup; if that fails, there should be a sonar/radar backup; if that fails, there should be an ultrasonic backup; and so on, with all of these communicating with each other to verify each sensor's calibration and performance. This appears to me to be a colossal failure of sensor integration and control-systems design. Regardless of how the tech is implemented, if an autonomous vehicle cannot detect this person in this situation, I honestly don't see how it's generally viable in the time frame outlined by the hype machine.
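
To illustrate the kind of cross-checking I mean, here's a minimal sketch (hypothetical interfaces and numbers; a real AV stack is vastly more sophisticated):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # e.g. "lidar", "radar", "camera"
    distance_m: float  # range to the nearest obstacle ahead, in metres

def obstacle_confirmed(detections: list[Detection],
                       max_range_m: float = 60.0,
                       quorum: int = 2) -> bool:
    """Treat an obstacle as real if at least `quorum` independent sensors
    report something within braking range; a silent or disagreeing sensor
    is a cue to check its calibration, not to ignore the hazard."""
    hits = [d for d in detections if d.distance_m <= max_range_m]
    return len(hits) >= quorum

# Hypothetical frame: lidar and radar agree, the camera sees nothing in the dark.
frame = [Detection("lidar", 48.0),
         Detection("radar", 47.5),
         Detection("camera", float("inf"))]
print(obstacle_confirmed(frame))  # True -> start braking
```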

More interesting, the victim doesn't have time to look at the car either. Frankly, if she doesn't have time to look, the car doesn't have time to stop. I would have expected the car to make at least some attempt to brake (especially as an autonomous car), but she wasn't making it out of that situation unscathed. She walked out of the dark into the path of a car along a poorly lit section of road where one would not expect a pedestrian to be crossing. The car didn't see her, and she didn't see the car, and if she doesn't notice the brightly lit vehicle then one can hardly expect the car to notice the set of drab clothes strolling through the dark.

The victim was almost certainly blasted out of her mind on heroin or meth. I see these "slowly walk the bicycle across the road" types quite often in the Bay Area. They simply don't care that a vehicle is coming, let alone proceed with anything resembling caution.

There was a case in my town when I was a kid where a guy got drunk one evening and fell asleep in the middle of a fairly lightly used piece of road. Similar sort of poor lighting and limited expectation of people on the road. A car hit him and he died. The driver was not charged, although they were obviously shaken up for months afterwards. In a generic sense, I don't see this as much different. I can see how a driver could very reasonably have not seen or expected her, and not been able to stop once it was obvious she was there. If I'm a train driver, I'm not liable for people committing suicide by train.

Yet interestinger, the "driver" clearly wasn't making any attempt to do his job. Here's where it gets odd. He hasn't made a reasonable attempt to avoid an accident. The accident was possibly largely unavoidable anyway, but that's sort of beside the point. At the very least, he's operating the vehicle recklessly, in my opinion. If he was actually driving and paying attention, I'd say there's a reasonable chance it's ruled a tragic accident without fault. With him sitting playing with his phone, looking up every few seconds, that seems irresponsible.

This is a separate, related issue. You try sitting behind the wheel of a car that's driving itself for probably more than 2 hours, fully expecting it to perform perfectly. Just about anyone would space out. It's remarkable the guy wasn't straight-up asleep. I don't think anything short of full automation (Level 4+) will be "safe" (as seen here, a driver can't realistically be expected to pay attention and be ready to immediately take control if they aren't actively engaged, unless they're pumped up with Adderall or something), and I don't think levels 4 and 5 are even possible.

But I guess we'll see.
 
And this is why I said I'd have to see the video before taking the sheriff's word. The sheriff is looking at this as a human being, and in human terms she did seem to come out of nowhere. But the crucial part is that she didn't come out of nowhere, and she shouldn't have been invisible to the various systems in place to detect the presence of various objects and living things. It's a straight-up fail of the detection system. Back to the drawing board while the inevitable massive lawsuit proceeds.
 
And this is why I said I'd have to see the video before taking the sheriff's word. The sheriff is looking at this as a human being, and in human terms she did seem to come out of nowhere. But the crucial part is that she didn't come out of nowhere, and she shouldn't have been invisible to the various systems in place to detect the presence of various objects and living things. It's a straight-up fail of the detection system. Back to the drawing board while the inevitable massive lawsuit proceeds.

Agreed. I'm curious to see the outcome of this. If Uber is held liable (thereby establishing a legal precedent), I could see the entire business model of driverless cars being put into question. A lot of big bets have been placed on the driverless car revolution... the stock market is looking pretty shaky as it is...

If Uber is not held liable, literally nothing matters anymore.
 
The victim was almost certainly blasted out of her mind on heroin or meth.
Classy.

You try sitting behind the wheel of a car that's driving itself for probably more than 2 hours, fully expecting it to perform perfectly.
A clearly misguided expectation on the part of someone whose job it was to act in the absence of functional technology. This isn't proven technology failing in a world of absolute acceptance. This is a testing scenario.
 
Classy.


A clearly misguided expectation on the part of someone whose job it was to act in the absence of functional technology. This isn't proven technology failing in a world of absolute acceptance. This is a testing scenario.

Absolutely. Doesn't mean it wasn't true in this situation (and probably many more). And I'm not ragging on the woman for being blasted, just observing that she probably was. What other reasonable explanation could be given for her action?
 
What other reasonable explanation could be given for her action?

Stupidity?

I see it all the time in Salt Lake, people just walk out into the road without looking or really caring because they think people will - and should - stop for them. If you honk at them, more often than not, they get bent out of shape about it and sometimes even try to get confrontational. Same goes for people on bikes that think they can blow red lights for the hell of it.
 
And I'm not ragging on the woman for being blasted, just observing that she probably was.
To what end? What purpose does pointing out this assumption (if it's anything other than that, please point me to the autopsy report) serve if not to shift blame? "Did you hear about that guy who died in a fire caused by faulty wiring in the apartment he was renting?" "Yeah, he was probably passed out drunk on the sofa."

What other reasonable explanation could be given for her action?
Good old-fashioned apathy; I see it all the time from all sorts of people who may or may not be gakked out on illicit, mind-altering substances.

People lull themselves into a false sense of safety, thinking that because they have the right-of-way when using a crosswalk, with further support from a light signaled in their favor, they are also permitted to cross in the absence of these conditions. They do not, in fact, have the right-of-way, but because people who are actually paying attention in those moments still slow or stop to let them cross, they think they do. This happens often enough that it becomes second-nature, and they don't even consider low visibility or unusually slick road surfaces caused by a recent rain after a long period without.
 
To what end? What purpose does pointing out this assumption (if it's anything other than that, please point me to the autopsy report) serve if not to shift blame? "Did you hear about that guy who died in a fire caused by faulty wiring in the apartment he was renting?" "Yeah, he was probably passed out drunk on the sofa."

If you read my post, I'm clearly stating that it is not her fault, but that a well-functioning driverless car should be able to detect/avoid such a situation.

Good old-fashioned apathy; I see it all the time from all sorts of people who may or may not be gakked out on illicit, mind-altering substances.

So the other two possible explanations given are stupidity & apathy. That's somehow more "classy" than assuming she was high? At least I'm giving her the benefit of the doubt that her actions were caused by an external influence. She didn't even look at the 2-ton object traveling at 40 mph directly towards her. That doesn't seem like something a person with functioning senses (nothing to do with stupidity or apathy) would do... unless they were under the influence of something... which is the entire reason I responded to that part of Imari's post.

People lull themselves into a false sense of safety, thinking that because they have the right-of-way when using a crosswalk, with further support from a light signaled in their favor, they are also permitted to cross in the absence of these conditions. They do not, in fact, have the right-of-way, but because people who are actually paying attention in those moments still slow or stop to let them cross, they think they do. This happens often enough that it becomes second-nature, and they don't even consider low visibility or unusually slick road surfaces caused by a recent rain after a long period without.

Ok.

Why she went into the street is totally beside the point of this thread. The fact is that she (and many others) do so for a myriad of reasons. A driverless car should be adept at responding.
 
I'm clearly stating that it is not her fault.
Why she went into the street is totally beside the point of this thread.
And yet you believe your assumption she was gakked out is pertinent. Let's shelve the assumptions, particularly those that cast an unfavorable light on the victim, eh?

a well-functioning driverless car should be able to detect/avoid such a situation.
A driverless car should be adept at responding.
Indeed. It seems to me that such a thing reaches a desirable degree of functionality through testing and making adjustments in areas the testing reveals as being in need. If this testing is to potentially put lives at risk, measures should be employed to mitigate the risk.

The fact is that we just plain don't know if a human driver would have been able to see the victim because the human driver present wasn't paying attention to the road ahead.

Whether the human driver present was complacent due to the assumed capabilities of the vehicle being present is a bit of a non-issue as well, as it was their duty to intervene if necessary. It was necessary. Whether or not they could see the need to intervene doesn't really matter because they weren't looking when it was their job to do so.

If a faulty product leaves manufacturing without getting attention from someone whose job it is to check for faults and someone is harmed as a result, who is to blame? Not attention with no faults detected, mind, but no attention at all.
 
Out of interest, do we know how the car stopped? Did it self-stop or did the "driver" take action?
Footage appears to show the individual in the Volvo reacting below the camera's line of sight, which seems to indicate the brakes being engaged and therefore the need to do so. Nothing I've read supports or refutes this.

I'd be interested in seeing more footage from the inside, after where the existing released footage cuts out.
 

Whether the human driver present was complacent due to the assumed capabilities of the vehicle being present is a bit of a non-issue as well, as it was their duty to intervene if necessary. It was necessary. Whether or not they could see the need to intervene doesn't really matter because they weren't looking when it was their job to do so.

If a faulty product leaves manufacturing without getting attention from someone whose job it is to check for faults and someone is harmed as a result, who is to blame? Not attention with no faults detected, mind, but no attention at all.


I would argue that is a huge issue, probably the most important, as it is part of the fundamental contradiction of partially-autonomous cars -- that is, the more automated the vehicle is, the less engaged the human driver is with controlling a vehicle that, by its very definition, still needs engagement/supervision/control. This person was, I assume, paid to be behind the wheel of the car. It's his job. Autonomous cars at levels 1-3 all require that a human driver have some form of control/supervision of the vehicle. To me, the viability of level 3 automation is cast into doubt by this incident -- the technology is not sufficient as to not need supervision, but humans don't have the attention span to provide that supervision, as is clearly demonstrated here. Now imagine the more pedestrian drivers among us, not being paid to supervise the driverless car, and very likely not as well versed in the capabilities and shortcomings of the automation. They are going to pay attention better than this guy whose only job is to pay attention? No way.

Again, I'm not saying this guy could have prevented this accident if he had been more alert. But the problem is that he clearly was not alert, and I don't think you can expect typical humans to behave any better in the same circumstance.
 
I've been an avid cyclist for 29 years, and in that time it has become glaringly--at times [literally] painfully--apparent that people don't need misplaced complacence in the presence of autonomous driving technology to not pay attention to the road. That inattention is at the core of this incident for me.

Now that isn't to say additional context would not be desirable or pertinent. For instance: was the individual merely being compensated for going about their normal routine in an autonomous vehicle, or was their primary directive to be present during testing in various situations? The former makes a case for the general public not being ready for even level 3 autonomy--whatever that means--while the latter is more clearly criminal negligence.

The latter may also be a bit more damning to the company responsible for conducting the test (be it Uber or a third-party entity, though Uber appears to be responsible as it hired the individual) for not employing a more competent individual. It's been made known that the individual has a criminal record (indeed, part of Uber's mission statement is to give people a second chance), and while the crimes are unrelated to driving, I'd personally think twice about hiring someone convicted of insider trading to watch my child (if I had one that needed to be watched) despite them being qualified to do so.

An interesting aspect of the incident--unrelated to the above--that has come to my attention:

The "super-weird" bit is what I'm referring to, not the proximity to a crosswalk.

Edited for spelling.
 

Now that isn't to say additional context would not be desirable or pertinent. For instance: was the individual merely being compensated for going about their normal routine in an autonomous vehicle, or was their primary directive to be present during testing in various situations? The former makes a case for the general public not being ready for even level 3 autonomy--whatever that means--while the latter is more clearly criminal negligence.

Neither case is terribly good for the prospects of Level 1-3 autonomy. And I don't think the driver is going to escape without some form of liability. The Uber of old might have even thrown him under the bus, so to speak. The new CEO seems a little less maniacal.

And yes that urban planning is awful.
 
And this is why I said I'd have to see the video before taking the sheriff's word. The sheriff is looking at this as a human being, and in human terms she did seem to come out of nowhere. But the crucial part is that she didn't come out of nowhere, and she shouldn't have been invisible to the various systems in place to detect the presence of various objects and living things. It's a straight-up fail of the detection system. Back to the drawing board while the inevitable massive lawsuit proceeds.

There's an interesting question there. Are we holding automated cars to a higher standard than human drivers? I mean, ultimately the goal is obviously that they be superior to human drivers, but right now they're clearly not. But if an autonomous car has an accident in a situation where a human would have had one too, is that something that should be legally actionable?

As a similar case, we have modern cars now that have limited autonomous action to help avoid an accident like the normal sensors on that Volvo. But someone having a crash in an XC90 is not held to a higher standard than me in my totally sensorless 1990 MX5. It may be desirable to hold full self-driving cars to a higher standard, but I feel like people are just assuming that without consideration.

If Uber is not held liable, literally nothing matters anymore.

Not exactly. It depends on the reasoning used to come to that decision. If they decide to hold autonomous cars to the same standard as human drivers, and demonstrate adequately that a human driver could not reasonably have been expected to avoid that accident, then I think Uber could get off while still establishing a legal precedent for what autonomous cars will and won't be liable for.
 
Not exactly. It depends on the reasoning used to come to that decision. If they decide to hold autonomous cars to the same standard as human drivers, and demonstrate adequately that a human driver could not reasonably have been expected to avoid that accident, then I think Uber could get off while still establishing a legal precedent for what autonomous cars will and won't be liable for.

It will be interesting to see what happens.

I could see a court/jury deciding that because the vehicle requires supervision and was not adequately being supervised, that the driver is at fault.

I could also see a court/jury deciding (depending on evidence...curious to see if they'll release the LIDAR imaging) that no supervision should have been required in this circumstance (this is a little dubious due to the classification of the autonomy) and that the fault was within the tech/programming. To me this is the most realistic description of what actually happened, even if it's not legally correct. Expanding on this, I don't think you would blame the driver, even if he wasn't paying attention, if the car suddenly veered off the road due to a programming/sensor error/fault.

Less likely I could see a court/jury actually place the blame on the pedestrian, but I doubt that will happen.
 
Y'all act like even though it's autonomous it can stop on a dime...
She stepped right in front of the car...
Autonomous or not, it can't instantly stop. As far as pedestrians go, it will never be perfect. It can only respond to its set parameters. I'll bet the car didn't even bother to recognize her, since she was a pedestrian on a median, not in or near a crosswalk.
Until people get their noses out of their phones, pay attention, and actually follow the law, this is going to happen.
I saw Joey say a driver should stop for pedestrians. In GA we only have to stop for pedestrians in a crosswalk with a signal indicating they can cross.
I've purposely scared the crap out of a few, and cussed out and informed the ones who think they own the road.
 
The problem is, as @Famine notes, that the radar detection systems should have seen her long before she became visible on the camera.

Dashcams do not have the same low-light sensitivity as the human eye. A human would have seen her maybe a split-second before the camera did.

But radar detection systems are not affected by ambient light, and she was already in the adjacent lane before she stepped into the pool of light, so she should have been visible to the radar for maybe three seconds or more before the impact. At 40 mph, that is more than enough time for the car to slow to a stop. As seen, the system didn't slow down at all.
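
A quick back-of-the-envelope check of that claim (the braking and reaction figures are my assumptions, not measured values):

```python
# Stopping from 40 mph, assuming ~0.7 g braking on dry asphalt
# and ~0.5 s of system reaction time before the brakes bite.
v = 40 * 0.44704          # 40 mph in m/s -> ~17.9 m/s
a = 0.7 * 9.81            # assumed deceleration -> ~6.9 m/s^2
reaction_s = 0.5          # assumed reaction time

braking_time = v / a                              # ~2.6 s to a full stop
stopping_dist = v * reaction_s + v**2 / (2 * a)   # ~32 m in total

# With ~3 s of radar visibility the car covers v * 3 ~ 54 m, so even a
# late, hard brake leaves room to stop, or at least shed most of the speed.
print(f"brake time: {braking_time:.1f} s, stopping distance: {stopping_dist:.0f} m")
```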

No clutter. No parked cars. No foliage to interfere with the detection. This one was a massive failure.
 
It's such a massive fail that it seems like the detection systems weren't even activated.
 
There's an interesting question there. Are we holding automated cars to a higher standard than human drivers? I mean, ultimately the goal is obviously that they be superior to human drivers, but right now they're clearly not. But if an autonomous car has an accident in a situation where a human would have had one too, is that something that should be legally actionable?
I think we absolutely should hold autonomous cars to a higher standard. If we only hold them to the standard of an "average" driver, that effectively makes them a danger to half of the driving populace. If they're only better than disengaged and frequently inattentive drivers, they're a danger to drivers who are already engaged and attentive.

What's the point unless (or until) they're virtually flawless? Even if a fleet of autonomous cars can improve upon the national statistics of accidents/fatalities, is it worth it if the fatalities in those statistics involve un-human-like total screwups? What value is there in a computer's millisecond-scale perfect reactions if the computer may fail to act in the first place, or act erroneously?

If autonomous cars are to be the future they're hyped up to be, I think they must be better than anyone willing to climb into one, in just about any possible circumstance. Until then, I see this as no better than a few technological steps away from the old legend of setting the cruise control on an RV and walking away to grab a drink from the mini-fridge. If no human is paying attention, there's figuratively no one behind the wheel. Just a computer that has learned to recognize some patterns from its sensors, and is otherwise oblivious.

That's what makes an incident like this more tragic to me than a fatal accident caused by human negligence. It's still human negligence, with hubris and misplaced faith in technology.
 
I'm all for holding the technology to a higher standard...as a part of it becoming more accepted and, frankly, ubiquitous.

This is a test of the technology gone horribly awry, and we still know only a fraction of the details that led to it. As has been pointed out, the technology doesn't rely on the road ahead being illuminated, so whether the camera or "driver" could see doesn't really matter in this regard.

Of course the technology probably should be held to a higher standard even during testing, and this is where the lax regulations in Arizona come into play. Why are the regulations lax? Is it a lure for tech companies or is it a massive oversight? Are companies taking the bait or are they taking advantage of oversights?

The latter half of that second question may end up being Uber's undoing from a criminal-conduct perspective if it turns out to be the case. Though if regulations are treated like contract guidelines, something similar to contra proferentem may apply: any ambiguity in the regulations should favor those taking advantage of it, even if they do so knowingly. It's deplorable, but is it criminal?

There are some assumptions in the above, for sure, so I think any judgements ought to be withheld until all circumstances of the event are known.
 
I think we absolutely should hold autonomous cars to a higher standard. If we only hold them to the standard of an "average" driver, that effectively makes them a danger to half of the driving populace. If they're only better than disengaged and frequently inattentive drivers, they're a danger to drivers who are already engaged and attentive.

You're sort of ignoring that if they're as good as an average driver, then they're also better than half of the drivers on the road. You're not making the road more dangerous by adding cars that are as good as the average driver.

What's the point unless (or until) they're virtually flawless? Even if a fleet of autonomous cars can improve upon the national statistics of accidents/fatalities, is it worth it if the fatalities in those statistics involve un-human-like total screwups?

Well, that's the interesting question. Would you accept fewer road injuries and deaths if they happen in ways that are different from the way that humans typically cause injuries and deaths now? I would, but that's because I like people and I'd like to see fewer of them injured or killed.

What value is there in a computer's millisecond-scale perfect reactions if the computer may fail to act in the first place, or act erroneously?

Yes, let's not implement any technology until it's perfect. That's not a fallacy at all.

If autonomous cars are to be the future they're hyped up to be, I think they must be better than anyone willing to climb into one, in just about any possible circumstance.

The point is not to buy into the hype, and to actually use your own brain cells to consider what would be a useful level of technology to employ and what risks we might accept in order to use it.

Obviously you're one of those people who sees things as black or white with no in-between. I find it remarkable that you've made it 31 years without developing any idea of nuance.

Until then, I see this as no better than a few technological steps away from the old legend of setting the cruise control on an RV and walking away to grab a drink from the mini-fridge.

Of course. That's exactly what it's like. That's not hyperbolic at all.

I'm really hoping you come back and post that you were really drunk when you posted this and it's not exactly what you meant.

If no human is paying attention, there's figuratively no one behind the wheel. Just a computer that has learned to recognize some patterns from its sensors, and is otherwise oblivious.

I recommend that you never fly on a plane.

That's what makes an incident like this more tragic to me than a fatal accident caused by human negligence. It's still human negligence, with hubris and misplaced faith in technology.

Lol. Hubris and misplaced faith in a prototype car doing testing? The guy behind the wheel didn't do his job, but he was there precisely because the designers knew that a product in testing can have flaws.

It's about as hubristic as texting and driving.

I'm all for holding the technology to a higher standard...as a part of it becoming more accepted and, frankly, ubiquitous.

But why? We choose a standard of driving for the road not because that's the average driver skill, but because that's deemed to be the minimum safe level. That's what driver testing is for (and whether it succeeds is another discussion).

If there should be a higher standard for computers, what's the argument for not holding human drivers to that standard also? If that's what is required to be safe, I don't see why you'd let human drivers off the hook for performing poorly.
 
If only for the reason a number have cited as to why this incident should not have occurred: the technology means the vehicle is capable of detecting far more than the human eye can. If all other aspects of its functionality are on par with what is expected of human drivers, that singular advantage is justification for higher expectations and therefore standards. However, I suspect that the technology, once fully developed, will be capable of far more and thus held to even higher standards, likely to the point that [eventually] human drivers will be an aberration.
 
You're sort of ignoring that if they're as good as an average driver, then they're also better than half of the drivers on the road. You're not making the road more dangerous by adding cars that are as good as the average driver.
I don't think it's a grand idea to add autonomous cars to the mix if they just bring in their own different blind spots and weaknesses. But you're right, what bugs me more is the thought of the person inside the autonomous car being a better driver than the computer. That isn't right to me, and it does make the road more dangerous on an individual basis, by replacing a better driver with a computer.

Well, that's the interesting question. Would you accept fewer road injuries and deaths if they happen in ways that are different from the way that humans typically cause injuries and deaths now? I would, but that's because I like people and I'd like to see fewer of them injured or killed.
To me it's not that straightforward. I don't consider something like this incident equatable to a human accident, partly because the technology should have been able to "see" the woman even in the dark, as has been said. I find it harder to accept than if the Uber employee was simply texting behind the wheel of a normal car. Maybe that's just me.

Yes, let's not implement any technology until it's perfect. That's not a fallacy at all.

The point is not to buy into the hype, and to actually use your own brain cells to consider what would be a useful level of technology to employ and what risks we might accept in order to use it.

Obviously you're one of those people who sees things as black or white with no in-between. I find it remarkable that you've made it 31 years without developing any idea of nuance
Accusing me of a fallacy while strawmanning me in the same breath? Mischaracterizing my views as black and white and then telling me I lack nuance? What's with the attitude?

Of course the technology doesn't have to be perfect, but so long as the average consumer believes that autonomous cars will do what it says on the tin, figuratively speaking -- and wisdom holds that companies should skip Level 3 autonomy and work on Level 4 autonomy for that reason -- it should be close enough to do better than the average driver. I think people should be able to depend on the technology being at least as safe as if they were driving themselves, whether they're someone who's always glued to their phone or a defensive driver.

I'm really hoping you come back and post that you were really drunk when you posted this and it's not exactly what you meant.
I'm hoping you come back and post that you were in a poor mood when you wrote this reply.

I recommend that you never fly on a plane.
Autopilot doesn't have to scan the skies for pedestrians or navigate cross traffic in a very close space. It is employed in a relatively controlled environment, and does little more than monitor specified instrument readings and operate the plane's control surfaces to maintain those readings. Computers are good for singular mundane tasks like that.

Driving a car down here on the ground is complex by comparison, requiring a whole new dimension of awareness and cognition (or a digital mimicry of it) to navigate a range of hazards. It's not the same.
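
To put it another way, the "hold a specified reading" part of an autopilot's job is close in spirit to a simple feedback loop (a toy sketch, nothing like certified avionics code):

```python
# Toy proportional controller: nudge a control surface in proportion
# to the error between a reading and its setpoint. Illustrative only.
def hold_setpoint(reading: float, setpoint: float, gain: float = 0.2) -> float:
    """Return an adjustment proportional to the current error."""
    return gain * (setpoint - reading)

altitude = 9800.0
for _ in range(50):
    altitude += hold_setpoint(altitude, setpoint=10000.0)
print(round(altitude))  # ~10000: the loop converges on the setpoint
```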

But why? We choose a standard of driving for the road not because that's the average driver skill, but because that's deemed to be the minimum safe level. That's what driver testing is for (and whether it succeeds is another discussion).

If there should be a higher standard for computers, what's the argument for not holding human drivers to that standard also? If that's what is required to be safe, I don't see why you'd let human drivers off the hook for performing poorly.
I have no kind words for what passes for your average driver these days -- in effect, I do hold people to a higher standard. @Mrs Wolfe tells me that after being together for some years, I raised her standard of driving and now she can't help but notice bad drivers and get aggravated. :P

I don't think there are any easy solutions, though. I believe driver education/testing is underfunded and lacking here in the states, and deserves more attention, but I know that won't solve everything.
 
skip Level 3 autonomy and work on Level 4
Do what now? My understanding is that each tier encompasses the functions of those before it, progressively requiring less input from the individual who previously would have had to do everything. By working on the "next" one, they're inherently working on the "current" one.



so long as the average consumer believes that autonomous cars will do what it says on the tin, figuratively speaking, it should be close enough to do better than the average driver
What the average consumer believes a vehicle is capable of shouldn't matter when taking full advantage of that presumed capability is still against the law. I don't know of any mass-market, conventional vehicle currently available to the American public that isn't capable of traveling at triple-digit speeds, but doing so is still against the law.
 