I don't think it's a grand idea to add autonomous cars to the mix if they just bring in their own different blind spots and weaknesses. But you're right, what bugs me more is the thought of the person inside the autonomous car being a better driver than the computer. That isn't right to me, and it does make the road more dangerous on an individual basis, by replacing a better driver with a computer.
If you're going to cherry pick the situation where a better driver is replaced by this equivalent-to-an-average-driver AI, then there's no discussing things with you. Come on, man.
We live in a world where autonomous cars would replace all sorts of drivers, from good ones to awful ones. In fact, I'd suggest it's likely that they will preferentially replace worse drivers first. Bad drivers tend not to really enjoy driving, whereas good drivers often do. I'd suggest that bad drivers would be more eager to get an autonomous car that removes a task they don't like than good drivers who could take it or leave it. But that's merely a suggestion; I don't claim that it would necessarily be the case.
Still, we're talking about autonomous cars in generalities here, which means working with the general population. That means I need you not to cherry pick situations in which you put an excellent human driver up against a middling autonomous AI and ignore the overall population statistics.
To me it's not that straightforward. I don't consider something like this incident equatable to a human accident, partly because the technology should have been able to "see" the woman even in the dark, as has been said. I find it harder to accept than if the Uber employee had simply been texting behind the wheel of a normal car. Maybe that's just me.
It's a straightforward question that you're avoiding answering, presumably because answering it would mean saying that you'd prefer more deaths caused by humans. I don't know, because you didn't answer, so I'll just assume that's the case unless you want to clarify.
Accusing me of a fallacy while strawmanning me in the same breath?
I'm sorry, did you or did you not say
"what value is there in a computer's millisecond-scale perfect reactions if the computer may fail to act in the first place, or act erroneously?"
Characterising that as "not implementing a technology until it's perfect" is pretty accurate. You're suggesting that computer reactions have no value if they can fail or act in error, or at least that's what I got from that sentence. Perhaps I misunderstood, in which case you're welcome to clarify that too.
Mischaracterizing my views as black and white and then telling me I lack nuance?
See above. If the view is truly that computers are of no value until they can react perfectly, that's a pretty black and white view. A nuanced view would be that there's a certain level of performance that a computer system would need to meet, and that would be acceptable regardless of whether the computer system in question is performing to the limits of its theoretical capabilities.
What's with the attitude?
What's with accepting a less safe driver simply because it happens to be human? I dislike people who espouse views that would result in more people being killed than otherwise. That's why I asked you above whether you'd accept computers if they were to result in fewer deaths, something that you dodged.
If you won't answer, I'll assume that it's because you'd rather have more people die at human hands than trust a machine.
Of course the technology doesn't have to be perfect, but so long as the average consumer believes that autonomous cars will do what it says on the tin, figuratively speaking -- and wisdom holds that companies should skip Level 3 autonomy and work on Level 4 autonomy for that reason -- it should be close enough to do better than the average driver. I think people should be able to depend on the technology being at least as safe as if they were driving themselves, whether they're someone who's always glued to their phone or a defensive driver.
How can you say in the same paragraph that autonomous cars should be better than the average driver and that people should be able to depend on the technology being at least as safe as if they were driving themselves? Do you expect autonomous cars to be better than all human drivers, or do you just not understand statistics?
I do find it amusing that you've backpedaled to pretty much exactly what I was saying though: autonomous cars really only need to be equivalent to or better than the average driver to provide a benefit. I'm glad we could see eye to eye on this.
I'm hoping you come back and post that you were in a poor mood when you wrote this reply.
I was afterwards. Replying to people who are willing to accept extra deaths because they're uncomfortable with technological advance makes me angry.
I'll be honest, I still think you're an [insert word here] for being willing to accept humans driving and a higher road toll rather than have autonomous vehicles that make you uncomfortable but kill fewer people. But I'm human, and I'm free to judge people for their espoused opinions.
Perhaps one day I'll learn more about you and develop a more nuanced view, but at the moment I've only got a few data points and I frankly don't like your acceptance of road deaths just because they're caused by humans.
Autopilot doesn't have to scan the skies for pedestrians or navigate cross traffic in a very close space. It is employed in a relatively controlled environment, and does little more than monitor specified instrument readings and operate the plane's control surfaces to maintain those readings. Computers are good for singular mundane tasks like that.
Driving a car down here on the ground is complex by comparison, requiring a whole new dimension of awareness and cognition (or a digital mimicry of it) to navigate a range of hazards. It's not the same.
It's still a computer controlling your vehicle, and so there's "no one behind the wheel". It still has the same problems: its reactions aren't perfect, and it may fail to act or act in error. But that's less of a problem in a plane, because a failure doesn't result in an accident before the pilot can act (most of the time). I don't disagree: autopilots have been around for a long time because the environment is far better suited to computer control. Autonomous vehicles are only starting to become common now because sensor and computing technology is just starting to become capable of dealing with the more complex environment.
Now we're starting to get to a nuanced opinion. You can accept that a computer system only needs to be capable to a level appropriate for the situation that it's in. It doesn't need "millisecond-scale perfect reactions", it simply needs reactions appropriate to deal with the hazards that it would normally face. Realistically, for a car, it needs reactions appropriate to the time scale of the car's controls: likely tenths of a second, perhaps hundredths at best, because below that it makes no meaningful difference to the movement of the car. Once any driver makes the choice to brake, it takes seconds to slow to a stop from speeds that would be fatal, so there are severely diminishing returns on upping the speed of reaction. A tenth of a second after detection of a hazard is more than adequate, and is far faster than any human could hope to manage without mad reflexes and a foot already hovering over the brake.
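To put rough numbers on that, here's a back-of-the-envelope sketch. The 50 km/h speed and 7 m/s² braking deceleration are illustrative assumptions on my part, not measured figures, but any reasonable values tell the same story.

```python
# Back-of-the-envelope stopping distances, to show the diminishing returns
# on reaction time. Assumed values (illustrative only): 50 km/h travel speed,
# 7 m/s^2 braking deceleration (roughly a dry-road emergency stop).

SPEED_MS = 50 / 3.6   # 50 km/h in metres per second (~13.9 m/s)
DECEL = 7.0           # assumed braking deceleration in m/s^2

def stopping_distance(reaction_time_s: float) -> float:
    """Distance covered from hazard detection to standstill."""
    reaction_distance = SPEED_MS * reaction_time_s    # travelled before the brakes come on
    braking_distance = SPEED_MS ** 2 / (2 * DECEL)    # v^2 / (2a) once braking starts
    return reaction_distance + braking_distance

for label, t in [("human (~1.5 s)", 1.5), ("computer (0.1 s)", 0.1), ("computer (1 ms)", 0.001)]:
    print(f"{label:18s} -> {stopping_distance(t):5.1f} m to stop")

# Approximate output:
#   human (~1.5 s)     ->  34.6 m to stop
#   computer (0.1 s)   ->  15.2 m to stop
#   computer (1 ms)    ->  13.8 m to stop
# Going from a typical human reaction to 0.1 s saves ~20 m of stopping distance;
# going from 0.1 s down to 1 ms saves barely a metre more.
```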
So instead of your computer system that needs millisecond-scale perfect reactions with no failures or errors, perhaps we could agree that what an autonomous car actually needs is adequate reactions with a reasonable error rate, performing equal to or better than the majority of humans and tending to fail in a safe way?
This is essentially what a plane has: a system that mostly reacts capably within the confines of its expected environment, has few fatal errors, and when it does fail tends to do so in a way that is either fundamentally safe or returns control to a human in a way they can reasonably be expected to handle. It is as safe as or safer than your average pilot (which is saying something, considering all the training that pilots go through and the safety expectations placed on them).
Does this sound like a more reasonable expectation for an autonomous car?