Tesla driver dies in first fatal crash while using autopilot mode

@homeforsummer -- People do stupid stuff, but an autonomous car can do stupider stuff in what would normally be relatively trivial circumstances, and as long as that remains true I don't think hard statistics are enough justification.
That isn't the way lawmakers will see things, rightly or wrongly. It isn't the way the public will see things either, when they're told that autonomous vehicles can massively cut the overall fatality rate.

I should point out, however, that an autonomous car won't technically do anything "stupid". It will either react to a situation or it won't, based on the tools available to it. Unfortunately in this incident the tools were insufficient.

Where an autonomous car will score over humans is that they won't do stupid stuff for the sake of it - it won't cut people up at junctions, it won't mindlessly swerve into someone else's lane, it won't jump red lights etc etc.
If the principle is that computers are safer than humans, they should be at least as safe as humans in practically any scenario, because I don't see the point in trading one set of faults for a different set of faults. Who wants to be the statistic who died because an autonomous car didn't even react to something that the computer didn't expect, but would be obvious to us?
A very valid point, but I suspect these instances will be in a minority. As Tesla's own statistics bear out, its cars have managed to travel 130 million miles on an effectively incomplete function before any fatalities resulted. Over that distance the average US driver would statistically have done something to get themselves killed about a third of the way sooner, and worldwide a little over two drivers would have been involved in fatal incidents in the same period.
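
To put rough numbers on that (a back-of-the-envelope sketch only; the ~94-million-mile US and ~60-million-mile worldwide per-fatality averages are the ballpark figures Tesla quoted at the time, treated here as assumptions):

```python
# Back-of-the-envelope check of the mileage comparison. The per-fatality
# averages below are the ballpark figures Tesla quoted (assumptions here).
autopilot_miles = 130e6           # miles on Autopilot before the first known fatality
us_miles_per_fatality = 94e6      # assumed US average
world_miles_per_fatality = 60e6   # assumed worldwide average

# Fraction of Autopilot's 130M miles at which the average US driver
# would statistically have had a fatal crash
us_fraction = us_miles_per_fatality / autopilot_miles
print(f"US average fatality after {us_fraction:.0%} of the distance")   # ~72%, i.e. a quarter to a third sooner

# Fatalities implied by the worldwide average over the same distance
world_fatalities = autopilot_miles / world_miles_per_fatality
print(f"Worldwide average over 130M miles: ~{world_fatalities:.1f} fatal incidents")  # ~2.2
```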
The technology is still in early development, but I doubt the necessary leap in sophistication is even possible with cameras, radar, or lasers alone, because they're so vulnerable to confusing or inadequate input. It's a delicate puppeteer act that very, very crudely simulates some of the intuition we humans evolved to possess.
I think you're underestimating the potential of these systems somewhat. I'd suggest they're not even trying to simulate human intuition because the alternative is so much more advanced than that.

Theoretically, these systems can be developed to "see" virtually everything in a given area and make decisions based on thousands of factors. A human driver is largely relying on intuition and experience because we essentially have to make lots of little guesses about things - our brain "fills in" most of what we actually "see", for instance; we have to guess another vehicle's trajectory based on what we've seen in the past and on the driver's "body language" (where they've positioned their car on the road, where their head is turned); and most fairly talented drivers can tell when something doesn't quite look right.

A computer will generally have calculated all those human guesses down to a finite margin. It's not trying to crudely imitate the way a human would think - it's removing any guesswork entirely and basing decisions on the way things actually are.

Now the caveat here is that obviously the tools Tesla uses aren't sufficient right now. But is it so hard to imagine that, essentially from this point forward, that sort of collision need never happen to an autonomous car again? Radar scanning the road further ahead and at a higher angle will immediately catch that truck (or car, or motorcycle, or combine harvester etc), notice that it's moving when it shouldn't be, and take appropriate action.

Of course, that's ignoring the next step - vehicle to vehicle communication - entirely. When autonomous vehicles are talking to each other then you don't even need to calculate things any more - the other vehicle will already have told your vehicle exactly where it is and what it's about to do.
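
As a toy illustration of the idea (this is not the real DSRC/SAE J2735 message format - the fields, numbers and constant-velocity assumption are mine, purely to show the concept):

```python
# Toy illustration of vehicle-to-vehicle awareness: each car broadcasts its
# position and velocity, and receivers project the sender's path forward to
# see how close it will come to their own. Not a real V2V message format.
from dataclasses import dataclass

@dataclass
class V2VMessage:
    sender_id: str
    x: float   # metres east of a shared reference point
    y: float   # metres north
    vx: float  # velocity east, m/s
    vy: float  # velocity north, m/s

def closest_approach(me: V2VMessage, other: V2VMessage, horizon_s: float = 5.0) -> float:
    """Smallest separation (metres) over the next few seconds, assuming both
    vehicles hold their current velocity."""
    rx, ry = other.x - me.x, other.y - me.y          # relative position
    rvx, rvy = other.vx - me.vx, other.vy - me.vy    # relative velocity
    speed_sq = rvx ** 2 + rvy ** 2
    if speed_sq == 0:
        return (rx ** 2 + ry ** 2) ** 0.5
    # Time of minimum separation, clamped to the look-ahead window
    t = max(0.0, min(horizon_s, -(rx * rvx + ry * rvy) / speed_sq))
    return ((rx + rvx * t) ** 2 + (ry + rvy * t) ** 2) ** 0.5

car = V2VMessage("car", 0.0, 0.0, 0.0, 30.0)         # us, heading north at ~30 m/s
truck = V2VMessage("truck", -20.0, 40.0, 10.0, 0.0)  # crossing 40 m ahead at 10 m/s
print(f"Closest approach: {closest_approach(car, truck):.1f} m")  # ~6 m -> time to brake
```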
I'm most interested to find out why the radar/lidar did not detect the enormous trailer.
Jalopnik hypothesised that the radar only scans the world at about bumper height - so it can spot pedestrians, kids, pets, the bumpers of other cars, walls etc, but because trailers over there seem to sit conveniently at windscreen height it simply wasn't aware of it. Ordinarily the camera system might have been a failsafe, but here it seems that it couldn't discern white trailer from bright sky.
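
For what it's worth, the geometry of that hypothesis is easy to sketch (the mounting height, beam angle and trailer clearance below are illustrative guesses, not Tesla's actual sensor specification):

```python
# Crude geometry of the hypothesised blind spot: a radar mounted low on the
# bumper with a narrow vertical beam passes entirely underneath a high trailer
# deck once the car gets close enough. All figures are illustrative assumptions.
import math

radar_height_m = 0.5       # assumed bumper-level mounting height
half_beam_deg = 2.5        # assumed vertical half-beamwidth above horizontal
trailer_bottom_m = 1.2     # assumed clearance under the trailer deck

# Range inside which the top of the beam is still below the trailer deck
blind_range_m = (trailer_bottom_m - radar_height_m) / math.tan(math.radians(half_beam_deg))
speed_ms = 30.0            # roughly highway speed (~65 mph)
print(f"Beam passes under the deck inside ~{blind_range_m:.0f} m")        # ~16 m
print(f"That gap is covered in ~{blind_range_m / speed_ms:.1f} s at speed")  # ~0.5 s
```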
 
That's the thing: it's at least implied that you can sit back in a Tesla and it'll drive itself - not necessarily by Tesla themselves, but by the whole hype surrounding self-driving cars right now. I agree the driver is to blame if he puts too much faith in the system, but the public perception supports it.

To activate the software the driver has to go through several warning screens that are additional to the warning literature supplied with the car by Tesla. Personally I think "public perception" is completely moot here; you might think the car is (or should be) capable of full autonomy but I don't and I suspect that lots of other people don't either.

In any case the manufacturer very specifically stated that it wasn't. Things implied by the-internet-people and things stated by the OEM are two very different worlds, thankfully.
 
The closing rate/angle and apparently the bright Florida sky.

Neither of these would pose a problem to radar or lidar.

Jalopnik hypothesised that the radar only scans the world at about bumper height - so it can spot pedestrians, kids, pets, the bumpers of other cars, walls etc, but because trailers over there seem to sit conveniently at windscreen height it simply wasn't aware of it. Ordinarily the camera system might have been a failsafe, but here it seems that it couldn't discern white trailer from bright sky.

I think radar is too dispersive for that to happen. My guess would be that Tesla is using some sort of clutter-mitigation/high-pass filter on its radar. If the truck was moving perpendicular to the Tesla (and therefore returning a very low Doppler frequency), the radar's filter algorithm might have disregarded it. But I feel the lidar still should have caught it. This might be the 'bumper height' issue, as lidar is substantially more focused. Maybe they need a few more lidar sensors...
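
To illustrate the clutter-filter idea with made-up numbers (this is emphatically not Tesla's actual signal processing - just the basic Doppler relationship f_d = 2 * v_radial * f / c with an assumed "moving target" threshold):

```python
# Minimal sketch of why a moving-target-indication (clutter) filter could
# discard a crossing trailer: radar only measures the velocity component along
# the line of sight, so a target crossing nearly perpendicular to it produces
# almost no Doppler shift and can look like stationary clutter.
# The threshold and target speed are made-up illustrative values.
import math

RADAR_FREQ_HZ = 77e9             # common automotive radar band
C = 3e8                          # speed of light, m/s
CLUTTER_THRESHOLD_HZ = 500.0     # assumed minimum Doppler to count as "moving"

def doppler_shift_hz(target_speed_ms: float, crossing_angle_deg: float) -> float:
    """Doppler shift for a target moving at the given angle to the line of sight."""
    radial_speed = target_speed_ms * math.cos(math.radians(crossing_angle_deg))
    return 2 * radial_speed * RADAR_FREQ_HZ / C

for angle in (0, 45, 85, 90):            # 0 = head-on, 90 = crossing perpendicular
    shift = doppler_shift_hz(15.0, angle)  # trailer doing ~15 m/s
    kept = abs(shift) > CLUTTER_THRESHOLD_HZ
    print(f"{angle:>2} deg: {shift:8.0f} Hz -> {'tracked' if kept else 'filtered as clutter'}")
```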
 
I'd narrow it down to programming. If the system(s) could not read the trailer in that lighting situation, maybe something physical or something in the software needs work.

It's like how a human can't look directly at the sun without damaging their vision, but with sunglasses the eye can focus better. Maybe the sensors need calibration or modifications to cope with reflections or low angles at the horizon.
 
A quick Google brought up this picture of a Tesla that crashed while self-parking

[Image: tesla-crash.jpg]


It's obvious the car hasn't 'seen' the load on the back of the truck, as it is above bumper height. If the car in the accident had the same issue, added to its cameras being blinded by the white trailer against the bright sky, the car just did not see it. If the driver had been paying attention he'd have seen it and could've avoided the accident. If it had been in Europe, where the majority of lorries have underframes or fairings on the bottom of their trailers, the radar could have detected it.
 
It says in this article that Tesla tells people they should still keep their hands on the wheel at all times and stay alert. There are also a bunch of videos in there of people filming themselves doing everything but paying attention.


Tesla drivers play Jenga, sleep, using Autopilot in nerve-wracking videos

http://usat.ly/29kFY1q
 
A quick Google brought up this picture of a Tesla that crashed while self-parking

[Image: tesla-crash.jpg]


It's obvious the car hasn't 'seen' the load on the back of the truck, as it is above bumper height. If the car in the accident had the same issue, added to its cameras being blinded by the white trailer against the bright sky, the car just did not see it. If the driver had been paying attention he'd have seen it and could've avoided the accident. If it had been in Europe, where the majority of lorries have underframes or fairings on the bottom of their trailers, the radar could have detected it.
This is why I mentioned there may need to be some recalibration or an adjustment to the sensors or programming.
 
It says in this article that Tesla tells people they should still keep their hands on the wheel at all times and stay alert. There are also a bunch of videos in there of people filming themselves doing everything but paying attention.


Tesla drivers play Jenga, sleep, using Autopilot in nerve-wracking videos

http://usat.ly/29kFY1q

So many Darwin awards, so little time... :lol:
 
So many Darwin awards, so little time... :lol:
Yeah, I know, right. The more I think on it, it's probably a lot safer on the roads if people like that do have a self-driving car.

Common Sense is somehow getting left out of more and more people's brain functions these days.
 
...an autonomous car won't technically do anything "stupid". It will either react to a situation or it won't, based on the tools available to it. Unfortunately in this incident the tools were insufficient.

Where an autonomous car will score over humans is that they won't do stupid stuff for the sake of it - it won't cut people up at junctions, it won't mindlessly swerve into someone else's lane, it won't jump red lights etc etc.
The car itself technically isn't stupid, but an incident like this is stupid. Similar to how people who have done stupid things behind the wheel aren't necessarily stupid.

Autonomous cars will simply introduce new inconveniences or hazards. They'll do stupid things for reasons we might not expect, even if they're designed to err heavily on the side of caution (or especially because of that).

A very valid point, but I suspect these instances will be in a minority. As Tesla's own statistics bear out, its cars have managed to travel 130 million miles on an effectively incomplete function before any fatalities resulted. Over that distance the average US driver would statistically have done something to get themselves killed about a third of the way sooner, and worldwide a little over two drivers would have been involved in fatal incidents in the same period.
Actually, I think the comparison doesn't really hold when you consider Autopilot's intended use. There's no statistic for fatal accidents in which a passenger was ready and able to take over for the driver...

I think you're underestimating the potential of these systems somewhat. I'd suggest they're not even trying to simulate human intuition because the alternative is so much more advanced than that.

Theoretically, these systems can be developed to "see" virtually everything in a given area and make decisions based on thousands of factors. A human driver is largely relying on intuition and experience because we essentially have to make lots of little guesses about things - our brain "fills in" most of what we actually "see", for instance; we have to guess another vehicle's trajectory based on what we've seen in the past and on the driver's "body language" (where they've positioned their car on the road, where their head is turned); and most fairly talented drivers can tell when something doesn't quite look right.

A computer will generally have calculated all those human guesses down to a finite margin. It's not trying to crudely imitate the way a human would think - it's removing any guesswork entirely and basing decisions on the way things actually are.
I wasn't saying that the computer is trying to imitate the way a human would think. What I meant by intuition is how, for example, we see a tractor trailer and know it's a tractor trailer. Although the computer can track more things than we can, make decisions quicker than we can, and execute its chosen action more reliably than we can, it can't discern things as well as we can. The computer doesn't "know" what a tractor trailer is, or anything it "sees". Its analysis of its surroundings is only a crude imitation of what a human would perceive.
 
In the report I read of this accident it said that Tesla drivers cannot operate the onboard DVD player while driving because of safety features - this driver had plugged in a portable DVD player to bypass those systems so that he could watch a Harry Potter film while using Tesla's Autopilot feature. No more stupid than playing Jenga or sleeping, but still utterly stupid nonetheless.
 
The car itself technically isn't stupid, but an incident like this is stupid. Similar to how people who have done stupid things behind the wheel aren't necessarily stupid.

Autonomous cars will simply introduce new inconveniences or hazards. They'll do stupid things for reasons we might not expect, even if they're designed to err heavily on the side of caution (or especially because of that).
My point is that those inconveniences or hazards will be on a much lesser scale than the ones humans create themselves.
Actually, I think the comparison doesn't really hold when you consider Autopilot's intended use. There's no statistic for fatal accidents in which a passenger was ready and able to take over for the driver...
Not quite sure what you're getting at here. Are you suggesting that the reason Autopilot has covered more miles before a fatality than the average human is because, as a semi-autonomous system, there's a human ultimately "controlling" it anyway? Or are you suggesting that the low accident statistics are because a human has always managed to take over at the right time to avoid an incident?

If the former I'd suggest that the car's relative autonomy is still likely to be safer than the average driver, and if the latter it's probably lending humans too much credit - despite Tesla's warnings, I'd be surprised if many Autopilot users do pay full attention to the road while the system is active. More probably will now...

Regardless, we live in an age of information. If Teslas were constantly getting themselves into dangerous situations that drivers were having to take over from, we'd have heard about it by now.
I wasn't saying that the computer is trying to imitate the way a human would think. What I meant by intuition is how, for example, we see a tractor trailer and know it's a tractor trailer. Although the computer can track more things than we can, make decisions quicker than we can, and execute its chosen action more reliably than we can, it can't discern things as well as we can. The computer doesn't "know" what a tractor trailer is, or anything it "sees". Its analysis of its surroundings is only a crude imitation of what a human would perceive.
That makes sense, but then it depends what value you put on perception.

Obviously in this particular case a human might have been able to avoid an incident, but human perception is hardly infallible. Experienced drivers can make guesses at say, the speed and trajectory of a vehicle they're approaching or is approaching them, but they're still only guesses. An autonomous system can, theoretically, build an exact mathematical picture of what's going on around it and navigate accordingly. There's no guesswork or intuition needed in such a situation.

Let's not forget that humans also have a remarkable ability to simply not see things in the first place, too. You're a motorcyclist, aren't you? I wonder how many times you've had to swerve or brake, or even been knocked off, by a driver who claimed they didn't even see you approaching. I wonder if most bikers would put more faith in an autonomous system not to try and kill them than they would a human driver?
 
My point is that those inconveniences or hazards will be on a much lesser scale than the ones humans create themselves.
I'm not convinced, but that's fair.

Not quite sure what you're getting at here. Are you suggesting that the reason Autopilot has covered more miles before a fatality than the average human is because, as a semi-autonomous system, there's a human ultimately "controlling" it anyway? Or are you suggesting that the low accident statistics are because a human has always managed to take over at the right time to avoid an incident?

If the former I'd suggest that the car's relative autonomy is still likely to be safer than the average driver, and if the latter it's probably lending humans too much credit - despite Tesla's warnings, I'd be surprised if many Autopilot users do pay full attention to the road while the system is active. More probably will now...

Regardless, we live in an age of information. If Teslas were constantly getting themselves into dangerous situations that drivers were having to take over from, we'd have heard about it by now.
I mangled the point after one too many rewrites, but it's a little of both.

As a semi-autonomous system, not only is a human ultimately in charge, but I would assume Autopilot is somewhat more likely to be used in situations where the owner would gladly allow the car to take over; boring and relatively simple parts of a commute, like stop-and-go traffic on the freeway. We don't know how many users are using it for virtually everything. And of course, it can't do everything yet, including driving in snow. That one limitation alone is enough to make it an unfair comparison.

I'm sure you're right that many Autopilot users don't pay full attention to the road while the system is active, but again, we don't know how many users are using it responsibly or not. I didn't mean to suggest that Teslas have been getting into trouble and users have been taking over at the last minute, but I can imagine instances where an attentive Autopilot user would take over to navigate something before the car would have a chance to act on it (or not). Not necessarily because they don't trust the system, but because it's a natural reaction as a driver, or because they wouldn't necessarily regard Autopilot as anything more than an advanced form of cruise control.

So it's fine as a plain safety statistic for Autopilot itself, but I think it's a bit of a stretch to compare it to all the miles human drivers cover.

That makes sense, but then it depends what value you put on perception.

Obviously in this particular case a human might have been able to avoid an incident, but human perception is hardly infallible. Experienced drivers can make guesses at say, the speed and trajectory of a vehicle they're approaching or is approaching them, but they're still only guesses. An autonomous system can, theoretically, build an exact mathematical picture of what's going on around it and navigate accordingly. There's no guesswork or intuition needed in such a situation.

Let's not forget that humans also have a remarkable ability to simply not see things in the first place, too. You're a motorcyclist, aren't you? I wonder how many times you've had to swerve or brake, or even been knocked off, by a driver who claimed they didn't even see you approaching. I wonder if most bikers would put more faith in an autonomous system not to try and kill them than they would a human driver?
I don't consider human fallibility an excuse for an incident like this. For all we know, Brown might not have been able to avoid the crash even if he was driving, but the Tesla reportedly didn't even flinch. If Autopilot mowed down any pedestrian in a white shirt in its path because it didn't react, that's unacceptable, even if there were statistically fewer fatalities or you can point to humans who run over pedestrians because they were looking at their phone. I feel autonomous cars should be held to a much higher standard because of their unique potential for failure.

I've personally never had an accident on my motorcycle with another vehicle, and I've been lucky enough to be able to count on one hand the number of times I've had to swerve or brake to avoid another vehicle. Given my life experience with electronics and digital devices, I certainly wouldn't trust an autonomous car any more than I would a human driver not to try and kill me on my bike. In situations where the car has every reason to see me (say, following behind me), I might trust the human more than the autonomous car, and I'm quite cynical toward other drivers.

Honestly, I personally don't understand how anyone who lives with electronics wouldn't expect by default for autonomous cars to screw up or go into panic mode for the dumbest reasons, especially once more companies get in on building them. That's where I'm coming from. Even if such events are statistically rare, I can't stand the thought of myself or someone I love being involved in an incident with such a stupid cause, and I can't relate to accepting it as better than human fallibility.
 
If the guy had actually been keeping his eyes on the road and not watching a movie, he would have seen the truck crossing the road ahead of him and reacted. The system works nearly flawlessly in the manner in which it's supposed to be operated.
 
Honestly, I personally don't understand how anyone who lives with electronics wouldn't expect by default for autonomous cars to screw up or go into panic mode for the dumbest reasons, especially once more companies get in on building them. That's where I'm coming from.
I think it's possibly unfair to treat autonomous cars as a sort of glorified iPhone or laptop that can have a spasm at any time and go into meltdown. In general they'll be built with massive redundancy exactly because a car is a big heavy object that travels at speed and such things can be dangerous. They're also jolly expensive objects with much smaller profit margins than the average smartphone or whatever (that costs a couple of dollars to build even if it sells for hundreds), and that expense buys extra care and expertise.

As I mentioned before, I can see the sort of incident that happened here effectively never happening again. A very particular set of circumstances came together that the car didn't catch and if there's any sort of electronic blind spot you can be sure it'll have been fixed. And that'll have probably made the cars exponentially safer as numerous other potentially dangerous scenarios may be covered by the same fix.

Ultimately this incident was human error - a human didn't give the hardware and software on the car the ability to see the truck. Write that blind spot out and the car will - in theory - see the truck every single time without fail.
Even if such events are statistically rare, I can't stand the thought of myself or someone I love being involved in an incident with such a stupid cause
Nor can I - but statistically, you, or I, or our loved ones, are considerably more likely to be extinguished by a drunk driver, or someone staring at their smartphone instead of the road, or a tired person, or someone in emotional tatters for whatever reason, or any other number of situations under the umbrella of human fallibility, than we are by an autonomous vehicle.

Or to put it another way, while I'd not want an autonomous vehicle myself, I'd be more than happy for every other car around me to be driven by a computer rather than a human. On yesterday's commute I had to stand on the brakes twice for drivers doing things that an autonomous vehicle would never do - one oncoming vehicle overtaking in an absolutely ridiculous place, and another simply pulling out of a side road in front of me on a 60mph road. If every other car was autonomous I'd also get to sit at 60mph on my morning commute instead of following people needlessly varying between about 30mph and 50mph.
 
@homeforsummer -- Provided perfect information about their surroundings, I believe computers are smart enough to do what autonomous cars promise to do without any spasms, but they're ultimately only as smart as their input allows. I'm not convinced that any amount of machine learning and refinement can overcome the liabilities of the sensors the Model S or Google car are equipped with.

Like you, I think it would be great if poor drivers were shuttled around instead of creating inconveniences and hazards on the road. But I think you'll agree we're still very early along in the process. Tesla acknowledges this, and Brown was at fault for what happened here (@GTPorsche), but I can't agree with citing the statistics to defend Autopilot's autonomous driving record or imply it's already superior to human drivers.
 
A quick Google brought up this picture of a Tesla that crashed while self-parking

[Image: tesla-crash.jpg]


It's obvious the car hasn't 'seen' the load on the back of the truck, as it is above bumper height. If the car in the accident had the same issue, added to its cameras being blinded by the white trailer against the bright sky, the car just did not see it. If the driver had been paying attention he'd have seen it and could've avoided the accident. If it had been in Europe, where the majority of lorries have underframes or fairings on the bottom of their trailers, the radar could have detected it.

Looks like an illegal trailer anyway. Where's the stop bar?
 
To activate the software the driver has to go through several warning screens that are additional to the warning literature supplied with the car by Tesla. Personally I think "public perception" is completely moot here; you might think the car is (or should be) capable of full autonomy but I don't and I suspect that lots of other people don't either.

In any case the manufacturer very specifically stated that it wasn't. Things implied by the-internet-people and things stated by the OEM are two very different worlds, thankfully.

I don't believe it's completely autonomous. In fact, personally I think Mercedes' systems on the road today are far safer because they are marketed as mere assists, and that the autonomous Google car is taking the wrong approach entirely to ever become feasible.

I agree an actual Tesla owner will have a better understanding of the autopilot, but this owner very much was part of those "internet-people". You have to agree there's a rather special fan culture surrounding Musk and Tesla and a considerable overlap with actual owners.

Looks like an illegal trailer anyway. Where's the stop bar?

I don't think that's the point; laws across the world differ, and drivers' regard for them even more. The important part is that the car should have recognized the object regardless and stopped, because if it can't, truck trailers in general may pose a huge danger.
 
Looks like an illegal trailer anyway. Where's the stop bar?

Regardless, what is the point of these so-called "stop bars"?

At highway speeds that weak little bar cannot hope to stop a 3000lb+ hunk of metal hitting it. Which, in most cases, leads to a decapitated occupant.
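
For a sense of the energy such a bar would have to absorb (back-of-the-envelope only; the mass and speed below are assumptions):

```python
# Back-of-the-envelope figure for the energy an underride "stop bar" would
# have to absorb: a ~3000 lb car at ~65 mph. Mass and speed are assumptions.
LB_TO_KG = 0.4536
MPH_TO_MS = 0.44704

mass_kg = 3000 * LB_TO_KG    # ~1360 kg
speed_ms = 65 * MPH_TO_MS    # ~29 m/s

kinetic_energy_j = 0.5 * mass_kg * speed_ms ** 2
print(f"Kinetic energy: {kinetic_energy_j / 1000:.0f} kJ")           # roughly 570 kJ

equivalent_drop_m = kinetic_energy_j / (mass_kg * 9.81)
print(f"Equivalent to dropping the car from ~{equivalent_drop_m:.0f} m")  # ~43 m
```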
 
Regardless, what is the point of these so-called "stop bars"?

At highway speeds that weak little bar cannot hope to stop a 3000lb+ hunk of metal hitting it. Which, in most cases, leads to a decapitated occupant.

Actually, that's the point of stop bars. They're supposed to be strong enough to withstand a hit like that. Most of them aren't. It's something that I think should be regulated. (gasp!)
 
Actually, that's the point of stop bars. They're supposed to be strong enough to withstand a hit like that. Most of them aren't. It's something that I think should be regulated. (gasp!)

I knew that's what they're there for, but like you said, the majority of them don't hold up. Now that I have a Miata I'm hypersensitive about following a big rig. I'll avoid it altogether by changing lanes or keeping an ultra-safe distance.

 
Now that I have a Miata I'm hypersensitive about following a big rig. I'll avoid it altogether by changing lanes or keeping an ultra-safe distance.
I'm not sure you really have much to worry about. Compared to even an empty big rig your Miata will stop on a dime. Doesn't account for the person behind you obviously, but I'd not worry about avoiding other traffic like the plague just because you're driving a smaller car.

Of course, keeping a safe distance should be part of day-to-day driving anyway. Mentioning it specifically here suggests you didn't before...
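
For what it's worth, the stopping-distance gap is easy to estimate (the ~0.8 g and ~0.5 g decelerations below are broad assumptions for a small sports car and a loaded rig respectively, not measured figures):

```python
# Rough stopping-distance comparison from 65 mph using d = v^2 / (2 * a).
# Deceleration figures are broad assumptions for illustration only.
G = 9.81
MPH_TO_MS = 0.44704
speed_ms = 65 * MPH_TO_MS

for vehicle, decel_g in (("Miata-sized car", 0.8), ("loaded big rig", 0.5)):
    distance_m = speed_ms ** 2 / (2 * decel_g * G)
    print(f"{vehicle}: ~{distance_m:.0f} m to stop from 65 mph")   # ~54 m vs ~86 m
```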
 
I'm not sure you really have much to worry about. Compared to even an empty big rig your Miata will stop on a dime. Doesn't account for the person behind you obviously, but I'd not worry about avoiding other traffic like the plague just because you're driving a smaller car.

Of course, keeping a safe distance should be part of day-to-day driving anyway. Mentioning it specifically here suggests you didn't before...

Not at all - I was saying I try to avoid them altogether, i.e. get out of the lane that they're in.

That's a good point, but I'd still rather not be in their lane anyway. Rock chips are another excellent reason.

On the contrary, I'm a very defensive driver and I am always aware of those around me and what they may or may not do. I try not to put myself in positions where I'll be stuck, but none of us are perfect. :)

I suppose I would also be in a bad situation if I were to rear end a trailer at high speed in the S4. But it would be my fault.
 