GTPNewsWire
Contributing Writer
This is the discussion thread for a recent post on GTPlanet:
This article was published by Joe Donaldson (@Joey D) on November 13th, 2017 in the Automotive News category.
"Driverless vehicles are coming, as we know. And somebody pointed out… that they will have to make, from time to time, ethical decisions. You're heading towards an accident; it's going to be fatal. The only solution is to swerve onto the pavement. But there are two pedestrians there. What does the vehicle do? Basically you will have bought a vehicle that must be programmed in certain situations to kill you. And you'll just have to sit there… and there's nothing you can do. These driverless vehicles, everybody goes 'oh, aren't they clever, they can stop at red lights'. They are going to have to face all sorts of things like who do I kill now. [Humans] are programmed to look after ourselves, and these driverless vehicles are going to be programmed to do the maths and say: lots of people over there, I'm going to kill you."

No, it doesn't.
People get drunk, which degrades their ability to look after anything, and still drive. They also get in cars and go insane on occasion.
"Driverless vehicles should always consider saving their occupants as their main priority in the event of a crash."

That potentially makes them dangerous to everyone outside the vehicle. They should just avoid actively putting anyone in danger.
And that's if the car even gets into a situation where such an accident is likely.
I like this anti-AVs theory that somewhere in the software of an autonomous vehicle will be an algorithm for whether to clatter through a triangle of cheerleaders or whether to plunge you off the edge of the Grand Canyon, when the software is actually designed to stop the car from getting into a situation where such a decision is required in the first place.
I have difficulty imagining a real scenario where there isn't a third option with less dismemberment, and a computer with gigs of RAM should be able to find that third option rather more quickly than our lizard brains. I imagine in most cases the car will just be instructed to brake 100%, as swerving is seldom advisable in any case. If that's not enough, then it's likely it was unavoidable.
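For what it's worth, that "brake hard unless a clearly better escape exists" idea is easy to sketch in code. A purely illustrative toy in Python; the option names, fields and numbers below are all made up, not taken from any real AV software:

    # Toy "brake first, swerve only if clearly safe and clearly better" policy.
    # Every name and number below is invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Option:
        name: str
        path_is_clear: bool            # nothing detected on this path
        predicted_impact_speed: float  # km/h if we still hit something

    def choose_action(options):
        # Default: maximum braking in the current lane.
        best = next(o for o in options if o.name == "brake_straight")
        # Only consider a swerve if its path is clear AND it ends better than braking.
        for o in options:
            if o.name != "brake_straight" and o.path_is_clear \
                    and o.predicted_impact_speed < best.predicted_impact_speed:
                best = o
        return best

    print(choose_action([
        Option("brake_straight", path_is_clear=False, predicted_impact_speed=15.0),
        Option("swerve_left", path_is_clear=False, predicted_impact_speed=0.0),
        Option("swerve_right", path_is_clear=True, predicted_impact_speed=0.0),
    ]).name)  # -> swerve_right; with no clear path it just brakes

Nothing in that kind of logic ever needs a "who do I kill" branch, which is the point being made above.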
"Thing is, a human could do some crazy steering stuff and achieve 'the third option', but the computer couldn't because the numbers wouldn't make sense. Extreme manoeuvres are not logical to a computer because they can't be computed with any certainty. So where a human swerves and somehow gets lucky, the computer wouldn't even attempt that move because there's no such thing as 'luck'. So as a result you end up hitting the pole."

What is this based on?
If you are just talking chance and the computer ignores the unlikely options, that is good. You'd have 1/10 piloted cars that escape a disaster by doing something that doesn't make sense vs 9/10 AI cars that make it out safely because they don't choose the statistically unfavorable option. Some self driving cars will crash but that's not even remotely scary when human driven cars crash a lot.
Practically though, I don't think I can agree with you at all, at least in the long run. AI is already faster at calculating than people and it will continue to get faster. It will also get smarter and possibly begin to learn on its own to account for the most unlikely situations.
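Putting rough numbers on the 1-in-10 versus 9-in-10 comparison above (these are just the odds from the post, not real data):

    # Toy expected-outcome comparison using the made-up odds from the post above.
    p_escape_lucky_swerve = 1 / 10   # wild manoeuvre that only works if you get lucky
    p_escape_conservative = 9 / 10   # the statistically favourable option

    print(f"Crash rate taking the 'lucky' move:      {1 - p_escape_lucky_swerve:.0%}")   # 90%
    print(f"Crash rate taking the conservative move: {1 - p_escape_conservative:.0%}")   # 10%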
"Thing is, a human could do some crazy steering stuff and achieve 'the third option', but the computer couldn't because the numbers wouldn't make sense. …"

Read back what you've just written, but very slowly, thinking about each point.
"Driverless vehicles should always consider saving their occupants as their main priority in the event of a crash."

Certainly, let's steer the car into a crowd of people just to save one or two passengers.
In an ideal world AI should be like that, but it never will be. I think this first trip is an example of how AI can't react as it should. Not sure how it will get better with so much going on around us.
"That all being said, I think the real problem (possibly intractable) is making autonomous cars work in situations they were not designed to encounter. Do autonomous cars have any sort of intelligence? Or do they just follow pre-programmed commands based on sensor input? I see this as the biggest obstacle for autonomous cars going faster than 20mph."

This is why I don't think it matters how quickly an autonomous car can process or react to something, or how much machine-learning you throw at the problem. The computer doesn't actually comprehend anything, and it relies on sensors that are too fallible. It can scan for objects, but could it ever infer what drivers or pedestrians are thinking or where they're looking? It can track painted lines, but how reliable could it ever be on washed-out gravel roads or a blanket of snow?
"That seems a fundamentally bad idea, your logic is flawed."

You're completely misunderstanding my comment.
"Of course @AJHG1000 is right: driverless vehicles (or any safety protocol charged with human life) should always consider saving their charges. What sort of system wouldn't consider that? You seem to have prejudged every case in which such a consideration is made as preferring the lives of the passengers; one might say that you've actually removed the consideration aspect and instead presumed a default outcome. That was logically incorrect, as I've already pointed out."

It's about priorities. He said "as their main priority". If it calculated that the ONLY option would be to steer towards an open place with people on it, then it would do so, because it would follow the highest priority in such a case.
Driving involves a lot more than just reacting quickly and confidently to something. It evokes millennia of evolved and instinctual awareness and comprehension. Computers are figuratively brain-dead, and nowadays we spend all our lives dealing with their screw-ups. The more sophisticated they get, the more trouble they create. It's baffling to me how it could be any different in giving one control over a car.
Is that really all that different from a human driver though? Humans have fallible sensors for sure. I'll give you that we're better at understanding each other than AI for now, but far from perfect at it. Painted lanes don't necessarily help humans anyway. I know that I've seen a fair number of drivers ignore lanes, not to mention lights, stop signs, and turn signals.
There might be a few areas where humans have advantages, but the bottom line is the accident rate. You can't know ahead of time when you're going to end up in a bad situation. If you could know, you would just avoid the problem. The only rational way to lower your chance of harm is to take the statistically favorable option. Eliminating human error and replacing it with less likely machine error could achieve that.
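In made-up numbers (these rates are placeholders, not statistics), that argument looks like this:

    # Toy comparison of expected crashes over many trips, with invented error rates.
    trips = 100_000
    human_error_rate = 2e-4    # hypothetical: 2 crashes per 10,000 trips
    machine_error_rate = 5e-5  # hypothetical: 0.5 crashes per 10,000 trips

    print(f"Expected human-driven crashes: {trips * human_error_rate:.0f}")   # 20
    print(f"Expected self-driving crashes: {trips * machine_error_rate:.0f}") # 5

If the machine's per-trip error rate really is lower, the aggregate comes out ahead even though the machine still crashes sometimes.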
Technically we didn't evolve to drive. We're going much faster than evolution shaped us for and we're not in direct control of anything (we operate the car through the wheel and pedals, etc). The machine in this article is doing a fair job despite not being around nearly as long as we have.
Computers have also bested us at complex tasks despite being "braindead". Chess would be one classic example.
You have to remember that we're machines too. Our brains run calculations and are not too far off from an AI's processor, conceptually. If intelligence is just an emergent property then it won't be exclusive to humans forever.
"As the guy who built the engine-brakes-transmission on that thing, I'm really happy that, at least, the emergency brake worked well!"

Wait, you worked on the Navya?
Personally, I'm not swayed by this. I'd rather be hit by an inattentive driver than a computer that failed to react despite "looking" right at me. Similarly, I'd rather accept responsibility for my own driving over the possibility of being killed or injured in an autonomous car that could not manage a situation I could have managed myself. I can't get over the absurdity and tragedy of such a prospect, however remote the possibility may be. Negligence is human. A machine is oblivious.
Lots of people are terrible drivers, but I would sooner advocate for another solution like investing in mass transportation or an overhaul of drivers' education.
We didn't evolve to drive, but we evolved to perceive. A computer will never perceive anything; it can only "see" what it is programmed to interpret from its sensors.
Computers are good at singular tasks and operating in controlled environments; chess is one example, autopilot in an airliner is another. The way a computer plays chess isn't really what I would call complex, because it just brute-force processes the permutations of a game and chooses the winning series of moves. Similarly, autopilot just manipulates the plane's control surfaces to maintain specified instrument readings. Something relatively taxing for us, but mundane for a computer.
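That "brute force" picture of computer chess is roughly a fixed-depth search like the toy below (real engines prune heavily and use evaluation functions rather than enumerating everything, so this is only the cartoon version of the idea):

    # Toy fixed-depth minimax over an abstract game: try every line of play to a
    # fixed depth and pick the best. The "game" at the bottom is a trivial stub.
    def minimax(state, depth, maximising, moves, apply_move, evaluate):
        options = moves(state)
        if depth == 0 or not options:
            return evaluate(state), None
        best_score = float("-inf") if maximising else float("inf")
        best_move = None
        for m in options:
            score, _ = minimax(apply_move(state, m), depth - 1, not maximising,
                               moves, apply_move, evaluate)
            if (maximising and score > best_score) or (not maximising and score < best_score):
                best_score, best_move = score, m
        return best_score, best_move

    # Stub game: the state is a number, moves add or subtract one, higher is better.
    score, move = minimax(0, 3, True,
                          moves=lambda s: [+1, -1],
                          apply_move=lambda s, m: s + m,
                          evaluate=lambda s: s)
    print(score, move)  # -> 1 1 (the best line nets +1 against a minimising opponent)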
Driving is very chaotic and multifaceted by comparison, unless you restrict it to a limited, controlled environment (like low-speed urban shuttling).
We're chemical beings, not mechanical. I don't think we're that relatable and I don't believe intelligence will ever emerge from computers or AI as they're currently designed. But that's getting into a whole other topic, though it may help explain where I'm coming from.
"Well, at least in Europe, an autonomous driving car could recognize the Peugeot logo and stop immediately."

While this is absolutely hilarious, I don't get the joke about Peugeots.