Tesla driver dies in first fatal crash while using autopilot mode

  • Thread starter polysmut
  • 89 comments
  • 6,206 views
This is a case of someone putting too much faith in a new, relatively unproven product and technology. Like people have said, there was no way for the car to react properly to the situation, but if he really was watching a movie on the road then he was a 🤬 idiot and put the technology way above where it actually is in terms of capability.

Regardless, my thoughts go to his family.
 
In this case yes, but that's also what the idea of a beta is. I'm sure Tesla will be dissecting what happened to improve the technology. It's sad that a death happened, but it's an opportunity to improve the system.
 
It always is an opportunity, I suppose.
 
Even the best autopilot systems will still need human intervention at some point. Airplanes don't even have the ability to take off and land on their own using autopilot. I highly doubt they ever will.

Hell, even if every car on the road was autonomous, you still have to factor in system failures being a possibility. Driverless cars will probably never be 100% driverless.

Well, drones like the Predator family and X-47B are capable of landing on their own, but they are obviously nowhere near the size of a passenger jet or cargo plane.
 
Driverless cars will be a thing someday, so eventually you'll be able to kick your feet up and watch a movie while your car does all the work...

But that's not the reality today, Mr. Tesla driver. So enjoy your Darwin Award.
 
Dan
Well, drones like the Predator family and X-47B are capable of landing on their own, but they are obviously nowhere near the size of a passenger jet or cargo plane.

They're also drones.

That kinda takes us to another point: namely, the "trust" we as passengers would be willing to give. Nobody gives a 🤬 if a drone crashes - nothing of real value is lost there - so we allow them to land by themselves...no worries there. But as a passenger on a plane, would you jump in the aircraft if there was no pilot on board? Personally, I wouldn't.
 
We who choose to get on a plane do so at our own risk. We're still at the mercy of the captain (not to mention the mechanics). Most of the time we have to listen to the flight attendants anyway, so if anything happens, we rely on them.
Someone controlling the plane from the ground would be fine. Weather and maintenance make up the rest of the "odds".

There are many more cars than planes, and collisions are far more likely. This won't be the last incident.
 
I think part of the problem is that eventually the system will be fully autonomous. Tesla's stuck between a rock and a hard place - it has developed a system with autonomy and can only properly develop it with the use of its customers (since the system works via machine learning), but on the other hand some of those customers can't be trusted not to treat the system as a fully working setup.

Even though the autopilot in the new E-Class works better, Mercedes is forcing the driver to keep his/her hands on the steering wheel - yet people still found ways to abuse the system.

Hardware (radars/cameras) and software (e.g. Car2X, still in its early stages) are our main limitations for autonomous driving right now. Upcoming cars won't rely on cameras for cruise control anymore; they will be equipped with advanced radars and laser scanners.
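Just to illustrate the kind of limitation I mean, here's a rough Python sketch - entirely made up, not Tesla's or Mobileye's actual logic, with invented function names and flags - of a braking decision that only fires when camera and radar agree, which is roughly how an obstacle can slip through when one sensor is blinded and the other misclassifies it:

```python
# Purely illustrative sketch (not Tesla's or Mobileye's actual logic) of why
# sensor limitations matter: emergency braking only fires when camera and
# radar agree, so an obstacle is missed whenever either sensor gets it wrong.

def camera_sees_vehicle(camera_detection):
    # A camera can wash out against a bright sky and classify nothing at all.
    return camera_detection is not None and camera_detection.get("class") == "vehicle"

def radar_sees_obstacle(radar_return):
    # Radar logic often ignores high, flat returns so it doesn't brake for
    # overhead signs and bridges - a made-up stand-in for that behaviour here.
    return radar_return is not None and not radar_return.get("looks_like_overhead_structure", False)

def should_emergency_brake(camera_detection, radar_return):
    # Requiring agreement cuts false alarms, at the cost of missed obstacles.
    return camera_sees_vehicle(camera_detection) and radar_sees_obstacle(radar_return)

# Hypothetical crossing-trailer scenario: camera blinded by glare, radar return
# sits high off the ground and gets filtered out as an overhead structure.
camera = None
radar = {"looks_like_overhead_structure": True}
print(should_emergency_brake(camera, radar))  # False -> no braking
```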
 
Today I was checking the Jalopnik newsfeed and found a cross-post from Gizmodo, their technology blog, in which an editor, the urbanism editor no less, says she "insists on banning humans from driving cars". The expected onslaught of comments calling her an idiot duly arrived, mine included, most of them with perfectly valid points...but, mark my words, she'll play the "it's because I'm a woman, isn't it?" card that Gawker loves to play :rolleyes:.
If it's who I think it is, Jalopnik loves cross-posting her pieces for some reason. Probably good for traffic.
Simply put: no human creation will ever be humanless...
I don't agree with this at all. Not least because despite Autopilot still being in its beta phase, it has already carried people autonomously over a greater distance before its first fatality than ordinary human drivers manage on average. That it's already demonstrably safer in its very, very early stages than the average human driver is a pretty good sign such systems will proliferate over the next few decades.
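For a rough sense of the numbers behind that claim, here's some back-of-the-envelope arithmetic using the figures Tesla itself quoted at the time (roughly one fatality in 130 million Autopilot miles versus one in 94 million miles for all US driving) - a sample of exactly one fatality, so treat it as illustration, not proof:

```python
# Back-of-the-envelope comparison using the figures Tesla quoted in mid-2016:
# ~1 fatality per 130M Autopilot miles vs. ~1 per 94M miles for US driving
# overall. With a single fatality in the sample this is illustrative only.

autopilot_miles_per_fatality = 130e6
us_average_miles_per_fatality = 94e6

def fatalities_per_100m_miles(miles_per_fatality):
    return 100e6 / miles_per_fatality

print(f"Autopilot:  {fatalities_per_100m_miles(autopilot_miles_per_fatality):.2f} fatalities per 100M miles")
print(f"US average: {fatalities_per_100m_miles(us_average_miles_per_fatality):.2f} fatalities per 100M miles")
```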

I know autonomous driving is an emotive subject among people who love driving themselves, but I do think people get worked up over a lot of nothing about it. While I expect it'll become commonplace over the coming years it's also fairly clear it won't be installed in every car (some may have semi-autonomous functions, but that'll take a while - radar cruise control is still the preserve of some reasonably expensive cars right now, for instance).

An autonomous Tesla or E-class isn't really such a bad thing, is it? I can't imagine a great proportion of people who buy such cars do it for the tactile joys of driving. In contrast, I can't see an autonomous Ferrari, or even an autonomous Miata coming along any time soon, so I'm not sure what all the fuss is about.

But then, people on the internet just like to complain about things. There are still swarms of people who irrationally hate electric cars, as if we're all being forced into driving them - the still relatively-low sales figures should convince most people that that's hardly the case.
 
I love how the blame is consistently being placed on the driver.

I don't have a problem with that, I just wonder when (if ever) Tesla will have to take responsibility. I doubt it ever will, based on the fan support, where people are saying: yes, it's autonomous, but no, you can't leave the driving to the autonomous system - you have to watch it and monitor it.

Makes me wonder what the point is if you have to sit there and monitor the system. I mean, isn't that basically driving without putting in physical inputs?

At any rate, I hope this doesn't happen again, but it will, and as it happens more often Tesla will eventually be held liable. You simply can't build a machine to do a job and, when it fouls up (killing someone), expect everyone to keep blaming the person who is at the mercy of the machine.

TLDR: Don't call Teslas autonomous until they actually are because that's how you get inattentive dead drivers.
 
I just wonder when (if ever) Tesla will have to take responsibility.
When it's a fully functioning, fully-autonomous system the law will probably require them to.

At this stage I can't see anything wrong with what Tesla is doing - they've made very clear to drivers that it's in a "beta" phase of development, and that drivers must remain attentive as a result. Clicking away the little message that says "keep your eyes on the road at all times" or whatever and then letting a semi-autonomous car drive while you watch a movie is not remaining attentive.

Now is there a moral grey area over Tesla releasing a beta product of something that's potentially incredibly dangerous, i.e. a car? Possibly. You wouldn't stick paying passengers in a "beta" aircraft.

But again, assuming those people buying the cars are fully-aware adults who are made aware of the potential risks, perhaps that's okay. Now if an autonomous Tesla clatters into the side of a Corolla or something and kills the occupants having not spotted the car, the argument might be different.
 
Indeed, I agree for the most part. Though I'd say releasing a potentially deadly product as a beta goes beyond a mere grey area... If a gun manufacturer did that, it would be the end of the company. So on that alone I get the impression the government is being kind to Tesla for reasons other than good business practices.

That said, fully aware adults who were made aware of the risks have been winning major lawsuits for years... Think big tobacco and hot coffee. Thing is, I don't think there is an anti-Tesla political sentiment in the US, so I doubt anyone will be supportive of the plaintiff.
 
...despite Autopilot still being in its beta phase, it has already carried people autonomously over a greater distance before its first fatality than ordinary human drivers manage on average. That it's already demonstrably safer in its very, very early stages than the average human driver is a pretty good sign such systems will proliferate over the next few decades.
I don't think a mere improvement on the statistical average is good enough. In my opinion the stats aren't directly comparable, and this first death is damning. I don't blame Tesla for misuse of their system, but it still proves the technology is woefully inadequate.

A fully autonomous car should be better than the best human drivers in every possible circumstance -- practically perfect -- because I think it's completely unacceptable for anyone to die in an accident that could have been avoided easily enough if they were driving the car themselves. Are fewer fatalities worth it if some of them are preventable in that way? Computers, sensors, cameras, and radars are fallible and prone to different mistakes than humans, for unique reasons. The thought of dying in a dumb accident like this one is too ghastly for me to have any trust in autonomous cars.

Of course, Autopilot is not intended as a fully autonomous system (yet), but at the other end you have the Google car, which cannot go anywhere they haven't mapped out in extensive detail ahead of time - pretty much the whole foundation of anything they hope to accomplish with it. And even when it's on a "known" street, it is no less fallible than Autopilot:
Another problem with maps is that once you make them, you have to keep them up to date, a challenge Google says it hasn't yet started working on. Considering all the traffic signals, stop signs, lane markings, and crosswalks that get added or removed every day throughout the country, keeping a gigantic database of maps current is vastly difficult. Safety is at stake here; Chris Urmson, director of the Google car team, told me that if the car came across a traffic signal not on its map, it could potentially run a red light, simply because it wouldn't know to look for the signal. Urmson added, however, that an unmapped traffic signal would be "very unlikely," because during the "time and construction" needed to build a traffic signal, there would be adequate opportunity to add it to the map.

But not always. Scott Heydt, director of marketing at Horizon Signal Technologies, says his company routinely sets up its portable traffic signals at road construction sites. Frequently, they are simply towed to a site and turned on. "We just set one up like that in New Jersey," said Heydt. "You can be driving to work and everything is normal, but on your way home, discover a new traffic light." (Of this possibility, a Google spokesperson said, “We will have to be ready for that.”)

...[the car's] video cameras can sometimes be blinded by the sun when trying to detect the color of a traffic signal. Because it can't tell the difference between a big rock and a crumbled-up piece of newspaper, it will try to drive around both if it encounters either sitting in the middle of the road. (Google specifically confirmed these present shortcomings to me for the MIT Technology Review article.) Can the car currently "see" another vehicle's turn signals or brake lights? Can it tell the difference between the flashing lights on top of a tow truck and those on top of an ambulance? If it's driving past a school playground, and a ball rolls out into the street, will it know to be on special alert? (Google declined to respond to these additional questions when I posed them.)

The technology isn't anywhere near as ready as Tesla, Google, and automakers would like people to believe. As long as it's based on sensors/cameras/radar like we have today, I personally don't think it'll ever be good enough, no matter how sophisticated the software becomes. But you can never count out some radical innovation on the hardware side.
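The unmapped-signal problem in that quote boils down to a prior-map dependency: if the car only looks for features its map says should exist, anything the map doesn't know about never gets checked for. A minimal sketch of that failure mode, with entirely invented map data and function names (not Google's actual code):

```python
# Minimal sketch of map-dependent perception with invented data - not
# Google's actual code. If the car only engages its signal detector where the
# prior map lists a signal, a portable signal at a new work zone is never
# looked for at all.

PRIOR_MAP = {
    # intersection id -> does the prior map record a traffic signal here?
    "main_and_1st": True,
    "main_and_2nd": False,  # map predates a newly installed portable signal
}

def approach(intersection_id, signal_actually_present):
    if PRIOR_MAP.get(intersection_id, False):
        return "detect the signal and obey it"
    if signal_actually_present:
        return "risk running a signal it never looked for"
    return "proceed normally"

print(approach("main_and_1st", signal_actually_present=True))
print(approach("main_and_2nd", signal_actually_present=True))
```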
 
Are fewer fatalities worth it if some of them are preventable in that way?
In a black and white way? Yes. Fewer deaths means fewer deaths.

I know and you know that in this recent incident we'd have simply seen the truck ahead of time and slowed/stopped/steered to avoid it - though it's still unclear why the truck turned across the path of two lanes of traffic in the first place, if something like that Tesla was within visible distance and approaching at speed.

However, actual flesh-and-blood people do stupid stuff all the time. I believe the main cause of car accidents is not speed or intoxication but lack of attention. And if you're looking at a spreadsheet and deduce that people are twice as likely to die driving themselves as they are to be driven by a computer, that's a pretty simple check in the "pros" box for autonomous cars.
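To put that spreadsheet framing in concrete terms - with invented numbers, purely to show the arithmetic behind "fewer deaths means fewer deaths", not real statistics or a real projection:

```python
# Invented numbers purely to make the "fewer deaths means fewer deaths"
# arithmetic concrete - not real statistics or a real projection.

annual_vehicle_miles = 3.0e12         # rough order of magnitude for US annual mileage
human_rate = 1 / 94e6                 # ~1 fatality per 94M human-driven miles
autonomous_rate = human_rate / 2      # the hypothetical "twice as safe" computer

human_deaths = annual_vehicle_miles * human_rate
autonomous_deaths = annual_vehicle_miles * autonomous_rate

print(f"Human-driven miles:      ~{human_deaths:,.0f} deaths per year")
print(f"Computer-driven miles:   ~{autonomous_deaths:,.0f} deaths per year")
print(f"Deaths avoided per year: ~{human_deaths - autonomous_deaths:,.0f}")
```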

Quite understandably you're looking at this scenario from the eyes of someone who is a competent enough driver that you can and probably have avoided potentially dangerous incidents in the past simply by paying attention.

But I'm sure you're equally aware that many drivers aren't like that - and while this case makes it very apparent that the technology has some way to go, I don't think it's fair to beat Tesla down too much for it, since they've made it very clear that their system isn't fully autonomous. Humans ultimately assume that risk, and by the looks of things, this particular driver took unsuitable risks - he even previously published videos where the car saved his ass because he effectively wasn't paying attention himself.

Edit: I should add for clarity that I'm far from a Tesla supporter/fanboy, as my posts in other threads should attest. But for me this particular incident seems fairly clear-cut: The technology obviously isn't ready, but Tesla has never claimed it was and has repeatedly said that users should keep their hands on the wheel and eyes on the road. If you tell someone "don't touch that fire, it's hot" and they get burned, that's not your fault for starting the fire...
 
If it's who I think it is, Jalopnik loves cross-posting her pieces for some reason. Probably good for traffic.

I don't agree with this at all. Not least because despite Autopilot still being in its beta phase, it has already carried people autonomously over a greater distance before its first fatality than ordinary human drivers manage on average...

It is who you think it is; it's incredible the kind of idiots they call "editors" these days! Worst of all was a White Knight who "apologized" on behalf of all car-lovers for answering her rudely :lol:.

Anyway, it appears I didn't make myself clear, since that's not what I meant to say - I apologize. What I was trying to say is that anything created by humans is inherently related to them: it's designed by them, for them, and maintained by them as well. From a purely logical point of view, almost no technology can exist outside humanity. The "autopilots", even if they weren't monitored by actual pilots/drivers, would still need to be programmed by humans and fed information by humans (like what was posted here already in regards to traffic signs and whatnot), and be upgraded and sustained by humans. In logical terms, any technological object, bar one exception, is completely dependent on the human factor. The only possible exception would be, say, an artificial intelligence with the ability to procure its own energy and perform its own physical upgrades...which, like I said, would be a :censored:ing stupid thing to create from an evolutionary point of view, since we'd essentially be creating a superior being that would threaten our own existence.

Ramblings aside, my point is that even "autopilots" are prone to human error because they're human creations. Those who believe otherwise are fooling themselves into thinking technology is some kind of perfect, flawless entity. What we saw here was an "autopilot" making an error due to its fundamentally human nature - i.e. the humans who made it didn't foresee such a circumstance and couldn't enable the "autopilot" to respond appropriately - and the lack of a human in charge to correct said error. It's as simple as that, and yes, Tesla aren't the ones to blame.
 
Mobileye aren't to blame either. It's their system, and they've stated that it isn't meant to prevent an accident like this; it's something the driver should have been aware of, had he not allegedly been watching a movie.
 
...it's still unclear why the truck turned across the path of two lanes of traffic in the first place, if something like that Tesla was within visible distance and approaching at speed.

Truck driver's view...
[Image: crash.jpg]
[Image: crash2.png]
 

Yikes...that's a bit of a delicate piece of road as well.

Maybe - the Tesla was travelling at a relatively high speed, which would be why the truck driver didn't see him approaching and considered himself clear to make the turn. Again, maybe. I'd say there's a blatant lack of signs or anything, really, on that road. Even here we have bollards at that kind of intersection, sometimes even a warning light to remind the driver.
 
Ramblings aside, my point is that even "autopilots" are prone to human error because they're human creations.

No. You're either misunderstanding the nature of human error or the nature of machine logic. Machines err; that doesn't mean they can err humanly.
 
Please explain your definition of human error, then; I admit I'm using it quite liberally. But my point is, when machines err, doesn't their failure stem from the humans that created and programmed them?
 
Thing is, I don't think there is an anti-Tesla political sentiment in the US so I doubt anyone will be supportive of the plaintiff.
Problem is that there is anti-Tesla sentiment in the US. They sell cars direct from the factory, which flies in the face of laws in half the US as of 2015 - laws that were pushed by the Big Three automakers.
 
Please explain your definition of human error, then; I admit I'm using it quite liberally. But my point is, when machines err, doesn't their failure stem from the humans that created and programmed them?

Not necessarily - and a human error has to be committed by a human. Was the machine in this case built and programmed in an erroneous manner by the attendant humans? The large amount of successful testing would suggest not. To date the software and machinery have operated well within expected parameters.

If anything the data to hand suggests a real human error - the machine was being left to perform a task that the human had specifically requested from the software (AutoPilot is not enabled in that car by default) and the human was failing to supervise the machine despite such supervision being a clearly stated operational requirement.
 
Edit: I should add for clarity that I'm far from a Tesla supporter/fanboy, as my posts in other threads should attest. But for me this particular incident seems fairly clear-cut: The technology obviously isn't ready, but Tesla has never claimed it was and has repeatedly said that users should keep their hands on the wheel and eyes on the road. If you tell someone "don't touch that fire, it's hot" and they get burned, that's not your fault for starting the fire...

That's the thing: it's at least implied that you can sit back in a Tesla and it'll drive itself - not necessarily by Tesla themselves, but by the whole hype surrounding self-driving cars right now. I agree the driver is to blame if he puts too much faith in the system, but public perception encourages it.
 
@homeforsummer -- People do stupid stuff, but an autonomous car can do stupider stuff in what would normally be relatively trivial circumstances, and as long as that remains true I don't think hard statistics are enough justification. If the principle is that computers are safer than humans, they should be at least as safe as humans in practically any scenario, because I don't see the point in trading one set of faults for a different set of faults. Who wants to be the statistic who died because an autonomous car didn't even react to something that the computer didn't expect, but would be obvious to us?

The technology is still in early development, but I doubt the necessary leap in sophistication is even possible to solve completely with cameras, radars, or lasers because they're so vulnerable to confusing or inadequate input. It's a delicate puppeteer act that very, very crudely simulates some of the intuition we humans evolved to possess. Such a system can absolutely save lives in situations it's capable of navigating almost every time, but it's frighteningly primitive for anything more than an advanced form of cruise control.

But that's exactly what Autopilot is currently intended to be, which is why I don't blame Tesla for this accident, like I already said before.
 
I don't know how I feel about this, honestly; the driver was supposedly watching a Harry Potter movie. I don't know - if it were a proven commodity with a great safety record, I could see doing something like that.

Personally, if I had a self-driving Tesla (:crazy: is how much I like Tesla vehicles - Nikola was the 🤬, the cars not so much), I would still be constantly watching the road, ready to take over in an instant if I noticed anything awry. Maybe he had gotten complacent with it too. Tesla will get sued of course, and probably lose, unless there was some kind of waiver they had the guy sign; I really don't know all that. When you build a self-driving car and it wrecks, it's hard to pin it on the driver, even if he was watching a boy wizard.

I'm leery about self-driving cars, only because I've seen many specials/documentaries on how easy it is to hack into normal cars' computers and basically take complete control of the car. But they probably are still way safer than human drivers; the number of people I see swerving and almost wrecking/causing wrecks because they're texting on their phones is frightening.

At least no one else was hurt in this, he didn't smash into a bunch of kids etc.
 
I hate to say this, but the accident was 100% the guy's fault and not Tesla's. He went into the test knowing that he was going to be working with unfinished, likely buggy software, and that the consequences of using that software in the wrong way are his and his alone. Not only that, but it's a well-known fact in the automotive community that Autopilot is meant as an aid and nothing more.

Tesla will get sued of course, and probably lose, unless there was some kind of waiver they had the guy sign; I really don't know all that.
Being that he likely knew the risks going into the test (being a Tesla test driver and all), I don't see this holding water in court at all.
 