I'm most interested to find out why the radar/lidar did not detect the enormous trailer.
The closing rate/angle and apparently the bright Florida sky.
I thought this was Ohio...
> @homeforsummer -- People do stupid stuff, but an autonomous car can do stupider stuff in what would normally be relatively trivial circumstances, and as long as that remains true I don't think hard statistics are enough justification.

That isn't the way lawmakers will see things, rightly or wrongly. It isn't the way the public will see things either, when they're told that autonomous vehicles can massively cut the overall fatality rate.
> If the principle is that computers are safer than humans, they should be at least as safe as humans in practically any scenario, because I don't see the point in trading one set of faults for a different set of faults. Who wants to be the statistic who died because an autonomous car didn't even react to something that the computer didn't expect, but would be obvious to us?

A very valid point, but I suspect these instances will be in a minority. As Tesla's own statistics bear out, its cars have managed to travel 130 million miles on an effectively incomplete function before any fatalities resulted. Going by the averages, a US driver would statistically have been involved in a fatal incident about a third earlier, and worldwide two-and-a-bit human drivers would have been involved in fatal incidents over the same distance.
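As a sanity check, the arithmetic behind that comparison is simple. A quick sketch, assuming the widely quoted averages of one fatality per roughly 94 million miles in the US and roughly 60 million worldwide (the figures Tesla cited; treat them as assumptions, not audited data):

```python
# Rough check of the fatality-rate comparison (rates are assumptions).
US_MILES_PER_FATALITY = 94e6      # approx. US average, per Tesla/NHTSA figures
WORLD_MILES_PER_FATALITY = 60e6   # approx. worldwide average
AUTOPILOT_MILES = 130e6           # miles on Autopilot before the first fatality

# How much earlier the average US driver would statistically have had a fatal incident:
earlier_fraction = 1 - US_MILES_PER_FATALITY / AUTOPILOT_MILES
print(f"US average fatality ~{earlier_fraction:.0%} earlier")  # ~28%, i.e. 'about a third'

# Expected fatalities worldwide over the same distance:
print(f"Worldwide: ~{AUTOPILOT_MILES / WORLD_MILES_PER_FATALITY:.1f} fatal incidents")  # ~2.2
```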
> The technology is still in early development, but I doubt the necessary leap in sophistication is even possible to solve completely with cameras, radars, or lasers, because they're so vulnerable to confusing or inadequate input. It's a delicate puppeteer act that very, very crudely simulates some of the intuition we humans evolved to possess.

I think you're underestimating the potential of these systems somewhat. I'd suggest they're not even trying to simulate human intuition, because the alternative is so much more advanced than that.
That's the thing - it's at least implied that you can sit back in a Tesla and it'll drive itself; not necessarily by Tesla themselves, but by the whole hype surrounding self-driving cars right now. I agree the driver is to blame if he puts too much faith in the system, but the public perception supports it.
Jalopnik hypothesised that the radar only scans the world at about bumper height - so it can spot pedestrians, kids, pets, the bumpers of other cars, walls etc, but because trailers over there seem to sit conveniently at windscreen height it simply wasn't aware of it. Ordinarily the camera system might have been a failsafe, but here it seems that it couldn't discern white trailer from bright sky.
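If Jalopnik's hypothesis is right, part of this is plain geometry: a bumper-height radar beam passes clean underneath a high trailer at close range, while returns from higher up at longer range reportedly risk being tuned out as overhead road signs to avoid false braking. A toy sketch of the close-range part - the mounting height, beamwidth and trailer clearance below are invented for illustration, not Tesla's actual specs:

```python
import math

# Toy geometry: what height does a bumper-mounted radar's beam cover at range?
# All numbers are assumptions made up for illustration.
RADAR_HEIGHT_M = 0.5         # assumed bumper-level mounting height
HALF_BEAM_DEG = 5.0          # assumed vertical half-beamwidth
TRAILER_UNDERSIDE_M = 1.2    # assumed clearance under a US box trailer

def beam_top_m(range_m: float) -> float:
    """Upper edge of the beam at a given range, flat road assumed."""
    return RADAR_HEIGHT_M + range_m * math.tan(math.radians(HALF_BEAM_DEG))

for r in (5, 15, 50):
    in_beam = beam_top_m(r) >= TRAILER_UNDERSIDE_M
    print(f"{r:2d} m: beam tops out at {beam_top_m(r):.2f} m -> "
          f"{'return from trailer' if in_beam else 'beam passes underneath'}")
```

On these made-up numbers the beam only clips the trailer at longer ranges, where a high flat return looks much like a sign gantry - and by the time the trailer is close, the beam sees nothing but the empty road beneath it.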
A quick Google brought up this picture of a Tesla that crashed while self-parking.
It's obvious the car hasn't 'seen' the load on the back of the truck, as it is above bumper height. If the car in the accident had the same issue, added to its cameras being blinded by the white trailer against the bright sky, the car simply did not see it. If the driver had been paying attention he'd have seen it and could've avoided the accident. In Europe, the majority of lorries have underrun guards or fairings on the bottom of their trailers, which the radar could have detected.
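On the cameras being blinded: a vision system needs some minimum contrast between an object and its background before it can find an edge at all. A toy illustration of that failure mode - the luminance values and the threshold are invented:

```python
# Toy Weber-contrast check: can a detector distinguish object from background?
# Luminance values and threshold are invented for illustration.
def weber_contrast(object_luminance: float, background_luminance: float) -> float:
    return abs(object_luminance - background_luminance) / background_luminance

DETECTION_THRESHOLD = 0.05  # assumed minimum contrast for a reliable edge

cases = {
    "dark lorry vs bright sky": (30.0, 200.0),
    "white trailer vs bright sky": (195.0, 200.0),
}
for name, (obj, bg) in cases.items():
    c = weber_contrast(obj, bg)
    print(f"{name}: contrast {c:.3f} -> "
          f"{'detectable' if c >= DETECTION_THRESHOLD else 'lost in the sky'}")
```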
This is why I mentioned there may need to be some recalibration or an adjustment to the sensors or programming.
Perhaps a secondary roof mounted radar would do the trick?
It says in this article that Tesla tells people they should still keep their hands on the wheel at all times and stay alert. There's also a bunch of videos in there of people filming themselves doing everything but paying attention.
Tesla drivers play Jenga, sleep, using Autopilot in nerve-wracking videos
http://usat.ly/29kFY1q
> So many Darwin awards, so little time...

Yeah, I know, right. The more I think on it, it's probably a lot safer on the roads if people like that do have a self-driving car.
> ...an autonomous car won't technically do anything "stupid". It will either react to a situation or it won't, based on the tools available to it. Unfortunately in this incident the tools were insufficient.
> Where an autonomous car will score over humans is that they won't do stupid stuff for the sake of it - it won't cut people up at junctions, it won't mindlessly swerve into someone else's lane, it won't jump red lights etc etc.

The car itself technically isn't stupid, but an incident like this is stupid. Similar to how people who have done stupid things behind the wheel aren't necessarily stupid.
Actually, I think the comparison doesn't really hold when you consider Autopilot's intended use. There's no statistic for fatal accidents in which a passenger was ready and able to take over for the driver...
> Theoretically, these systems can be developed to "see" virtually everything in a given area and make decisions based on thousands of factors. A human driver is largely relying on intuition and experience because we essentially have to make lots of little guesses about things - our brain "fills in" most of what we actually "see", for instance; we have to guess another vehicle's trajectory based on what we've seen in the past and on the driver's "body language" (where they've positioned their car on the road, where their head is turned); and most fairly talented drivers will be able to tell if something doesn't quite look right.
> A computer will generally have calculated all those human guesses down to a finite margin. It's not trying to crudely imitate the way a human would think - it's removing any guesswork entirely and basing decisions on the way things actually are.

I wasn't saying that the computer is trying to imitate the way a human would think. What I meant by intuition is how, for example, we see a tractor trailer and know it's a tractor trailer. Although the computer can track more things than we can, make decisions quicker than we can, and execute its chosen action more reliably than we can, it can't discern things as well as we can. The computer doesn't "know" what a tractor trailer is, or anything it "sees". Its analysis of its surroundings is only a crude imitation of what a human would perceive.
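For what it's worth, the quoted "calculated down to a finite margin" claim can be made concrete: the guess a human makes about another vehicle's trajectory is, for a tracker, plain arithmetic. A minimal sketch - the positions and velocities are invented, and a real tracker would carry uncertainty (e.g. a Kalman filter) rather than a single point estimate:

```python
# Minimal constant-velocity prediction of another vehicle's position.
# All positions/velocities are invented for illustration.
def predict(pos: tuple[float, float], vel: tuple[float, float], t: float) -> tuple[float, float]:
    """Predicted (x, y) in metres after t seconds, assuming constant velocity."""
    return (pos[0] + vel[0] * t, pos[1] + vel[1] * t)

other_car = predict(pos=(40.0, 3.5), vel=(-15.0, 0.0), t=2.0)
print(f"in 2 s the other car is predicted at {other_car}")  # (10.0, 3.5)
```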
> Autonomous cars will simply introduce new inconveniences or hazards. They'll do stupid things for reasons we might not expect, even if they're designed to err heavily on the side of caution (or especially because of that).

My point is that those inconveniences or hazards will be on a much lesser scale than the ones humans create themselves.
Not quite sure what you're getting at here. Are you suggesting that the reason Autopilot has covered more miles before a fatality than the average human is because, as a semi-autonomous system, there's a human ultimately "controlling" it anyway? Or are you suggesting that the low accident statistics are because a human has always managed to take over at the right time to avoid an incident?
That makes sense, but then it depends what value you put on perception.
I'm not convinced, but that's fair.
I mangled the point after one too many rewrites, but it's a little of both.
If the former I'd suggest that the car's relative autonomy is still likely to be safer than the average driver, and if the latter it's probably lending humans too much credit - despite Tesla's warnings, I'd be surprised if many Autopilot users do pay full attention to the road while the system is active. More probably will now...
Regardless, we live in an age of information. If Teslas were constantly getting themselves into dangerous situations that drivers were having to take over from, we'd have heard about it by now.
I don't consider human fallibility an excuse for an incident like this. For all we know, Brown might not have been able to avoid the crash even if he was driving, but the Tesla reportedly didn't even flinch. If Autopilot mowed down any pedestrian in a white shirt in its path because it didn't react, that's unacceptable, even if there were statistically fewer fatalities or you can point to humans who run over pedestrians because they were looking at their phone. I feel autonomous cars should be held to a much higher standard because of their unique potential for failure.
Obviously in this particular case a human might have been able to avoid an incident, but human perception is hardly infallible. Experienced drivers can make guesses at, say, the speed and trajectory of a vehicle they're approaching or that's approaching them, but they're still only guesses. An autonomous system can, theoretically, build an exact mathematical picture of what's going on around it and navigate accordingly. There's no guesswork or intuition needed in such a situation.
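As a concrete example of that "exact mathematical picture": given a range and closing speed from a ranged sensor, time-to-collision is a single division, no intuition involved. A minimal sketch (the sensor readings and braking threshold are invented):

```python
# Minimal time-to-collision (TTC) calculation from ranged sensor data.
# The readings and the 2.5 s threshold below are invented for illustration.
def time_to_collision(range_m: float, closing_speed_ms: float) -> float | None:
    """Seconds until impact at the current closing speed, or None if opening."""
    if closing_speed_ms <= 0:
        return None  # target is holding distance or moving away
    return range_m / closing_speed_ms

ttc = time_to_collision(range_m=55.0, closing_speed_ms=27.0)  # ~60 mph closing
if ttc is not None and ttc < 2.5:  # assumed emergency-braking threshold
    print(f"TTC {ttc:.1f} s - brake now")
```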
Let's not forget that humans also have a remarkable ability to simply not see things in the first place. You're a motorcyclist, aren't you? I wonder how many times you've had to swerve or brake, or even been knocked off, because of a driver who claimed they didn't even see you approaching. I wonder if most bikers would put more faith in an autonomous system not to try and kill them than they would in a human driver?
> Honestly, I personally don't understand how anyone who lives with electronics wouldn't expect by default for autonomous cars to screw up or go into panic mode for the dumbest reasons, especially once more companies get in on building them. That's where I'm coming from.

I think it's possibly unfair to treat autonomous cars as a sort of glorified iPhone or laptop that can have a spasm at any time and go into meltdown. In general they'll be built with massive redundancy precisely because a car is a big, heavy object that travels at speed, and such things can be dangerous. They're also jolly expensive objects with much smaller profit margins than the average smartphone (which costs a couple of dollars to build even if it sells for hundreds), and that expense buys extra care and expertise.
> Even if such events are statistically rare, I can't stand the thought of myself or someone I love being involved in an incident with such a stupid cause.

Nor can I - but statistically, you, or I, or our loved ones, are considerably more likely to be extinguished by a drunk driver, or someone staring at their smartphone instead of the road, or a tired person, or someone in emotional tatters for whatever reason, or any number of other situations under the umbrella of human fallibility, than we are by an autonomous vehicle.
To activate the software the driver has to go through several warning screens that are additional to the warning literature supplied with the car by Tesla. Personally I think "public perception" is completely moot here; you might think the car is (or should be) capable of full autonomy but I don't and I suspect that lots of other people don't either.
In any case the manufacturer very specifically stated that it wasn't. Things implied by the-internet-people and things stated by the OEM are two very different worlds, thankfully.
Looks like an illegal trailer anyway. Where's the stop bar?
Regardless, what is the point of these so-called "stop bars"?
At highway speeds that weak little bar cannot hope to stop a 3,000 lb+ hunk of metal hitting it, which, in most cases, leads to a decapitated occupant.
Actually, that's the point of stop bars. They're supposed to be strong enough to withstand a hit like that. Most of them aren't. It's something that I think should be regulated. (gasp!)
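For a sense of the loads involved, here's a back-of-envelope of the kinetic energy an underride guard would have to absorb, using the 3,000 lb figure from the post above and an assumed 65 mph highway speed:

```python
# Back-of-envelope: kinetic energy an underride guard would have to absorb.
# 3,000 lb is the figure from the post above; 65 mph is an assumed speed.
LB_TO_KG = 0.4536
MPH_TO_MS = 0.44704

mass_kg = 3000 * LB_TO_KG   # ~1361 kg car
speed_ms = 65 * MPH_TO_MS   # ~29 m/s

energy_j = 0.5 * mass_kg * speed_ms ** 2
print(f"~{energy_j / 1000:.0f} kJ to dissipate")  # roughly 575 kJ
```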
> Now that I have a Miata I'm hyper-sensitive about following a big rig. I'll altogether avoid it by changing lanes or keeping an ultra-safe distance.
I'm not sure you really have much to worry about. Compared to even an empty big rig your Miata will stop on a dime. Doesn't account for the person behind you obviously, but I'd not worry about avoiding other traffic like the plague just because you're driving a smaller car.
Of course, keeping a safe distance should be part of day-to-day driving anyway. Mentioning it specifically here suggests you didn't before...
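To put rough numbers on "stop on a dime": stopping distance scales as v²/(2a), so the achievable deceleration is the whole difference. A sketch with assumed deceleration figures (typical ballpark values, not measured data):

```python
# Rough stopping-distance comparison from v^2 / (2 * a).
# Deceleration figures are assumed typical values, not measurements.
G = 9.81
MPH_TO_MS = 0.44704

def stopping_distance_m(speed_mph: float, decel_g: float) -> float:
    v = speed_mph * MPH_TO_MS
    return v ** 2 / (2 * decel_g * G)

for name, decel in (("Miata (~0.9 g)", 0.9), ("laden big rig (~0.5 g)", 0.5)):
    print(f"{name}: {stopping_distance_m(65, decel):.0f} m from 65 mph")
```

On those assumptions the car stops in roughly 48 m against the rig's 86 m - which is exactly why following distance matters more than what you drive.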