Driverless Audi RS 7 at racing speeds at Hockenheim

  • Thread starter RewindTape
  • 211 comments
  • 8,346 views
Conceit and needlessly definitive statements aside, I agree with RC45 that the computers will screw up -- it's what computers do -- and if autonomous cars penetrate the market quickly enough before any major incidents occur, there is potential for disaster.

Even the primitive intervention systems that have been in use for decades glitch out or make wrong "decisions", sometimes risking bodily harm or death. I would expect any car with autonomous systems to require them to be disabled for safety, physically cutting them out if necessary, by the time the vehicle turns 10-15 years old. This is already the reality for some of us who drive aged '90s cars with comparatively simple things like ABS or traction control.

Sure, CPUs are basically better than us, but electronic sensors are vulnerable and even more fallible than we are, and the CPU relies upon them. While it's true that plain old mechanical things like brakes can fail, I find it foolhardy to layer another potential failure point over the top of those mechanical components, with the capacity to override the driver and cause an accident when there's nothing else wrong with the brakes, engine, or steering.

I think some of you guys are too busy embracing the possibilities to consider the harm that an entire fleet of sensor-driven automobiles could cause. I don't think people are "lazy" for accepting this, I'm not worried about being forced into an autonomous car within my lifetime, and I would love nothing more than to allow people who are uninterested in driving to quit ruining things for the rest of us. But I fully expect this technology to make mistakes, possibly on an outrageously frequent basis.
 
Conceit and needlessly definitive statements aside, I agree with RC45 that the computers will screw up -- it's what computers do -- and if autonomous cars penetrate the market quickly enough before any major incidents occur, there is potential for disaster.

Yes, it will make mistakes, but will it make more or fewer mistakes than human drivers? My guess is it will make fewer, far fewer. Computers aren't distracted by the millions of things that distract drivers every time they get behind the wheel, so removing the distraction will make everything safer, even if it's not 100% safe. I think you're forgetting that most of the people on the road don't have the foggiest idea what they're doing; having a computer drive for them will make us all safer.
 
I embrace autonomous driving technology, but I do have my doubts about it. When humans make mistakes, they learn from them and don't repeat the same mistake over and over. When a machine makes a mistake, it may not have the programming to understand the mistake and correct it, and so it repeats it. I do agree that machines are not distracted by the things that humans are, but there is always a chance that the sensors will fail. What happens when hundreds of cars on a highway, all traveling at high speeds on autopilot, suddenly lose all use of their sensors? I understand that the owners may be able to react, and there will be safeguards in the future, but you need to think about the "what ifs" that will always be there. With cars becoming more hackable, what happens when a hacker sends out a program that instantly disrupts a car's autopilot system, or makes the car travel at dangerous speeds (too fast, too slow)?
 
What happens when hundreds of cars on a highway, all traveling at high speeds on autopilot, suddenly lose all use of their sensors? I understand that the owners may be able to react, and there will be safeguards in the future, but you need to think about the "what ifs" that will always be there.
What-ifs become increasingly irrelevant as they get less likely. Sure, it is possible that every car fails at once, just like it's possible that every driver has a heart attack at the same time.

It's a math game: when the cars fail less often than the drivers, you're not risking anything by removing the driver, as far as the big picture goes. The car manufacturers don't want people to think they're selling death traps, so you can be sure they're doing the math.
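That math can be made concrete with a back-of-the-envelope comparison; every number below is an invented placeholder, not a real statistic:

```python
# Back-of-the-envelope comparison of accident exposure.
# Every rate here is an invented placeholder, for illustration only.

human_accidents_per_mile = 2.0e-6   # hypothetical human crash rate
auto_accidents_per_mile = 5.0e-7    # hypothetical autonomous crash rate
miles_per_year = 12_000

human_expected = human_accidents_per_mile * miles_per_year
auto_expected = auto_accidents_per_mile * miles_per_year
print(f"human: {human_expected:.3f} expected accidents/year")
print(f"auto:  {auto_expected:.3f} expected accidents/year")

# Independent, simultaneous failure of many cars is a different beast:
per_car_fault = 1.0e-4              # hypothetical chance of a fault per trip
cars = 50
all_fail_same_trip = per_car_fault ** cars
print(f"all {cars} cars faulting at once: {all_fail_same_trip:.1e}")
```

The "every driver has a heart attack at the same time" comparison is the last line: independent faults multiply, so the joint probability collapses toward zero even when the per-car rate is non-trivial.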

With cars becoming more hackable, what happens when a hacker sends out a program that instantly disrupts a car's autopilot system, or makes the car travel at dangerous speeds (too fast, too slow)?

Security companies get more business trying to prevent that. You can ask this about any technology (and you should), but just because there is a problem doesn't mean it can't be solved. You've listed a valid concern, but with so little detail that there isn't really a way to assess it.
 
That is the point. When Audi's automated cars actually do start killing people without human assistance, we can chat about what a great company they once were.

I hesitate to stab at where exactly your confusion begins but I don't believe anybody has said that autonomous anything is ultimately safe. If you think people will no longer die in accidents (or that such a claim is being made) then you've completely misunderstood.

Audis will continue to feature in the deaths of people as will cows, Chevrolets, fishbones and Ebola. We can agree on that and I look forward to chatting with you about it.

Your argument against autonomy seems to be that you cannot conceive of a computer being able to control a vehicle in a dynamic road environment.

What if I told you I could train you a driver, a special driver. A driver who could look out of every side of the car at once while planning the route, monitoring the vehicle and reacting at photon-speed to events? What if I told you that driver could drive 200 cars at once in the same area? Or 200,000... all acting in coordinated unison.

You're overlaying your idea of a human model onto the code model but there's no way one would actually design the system to work like a human.

The human element will come (as you touched upon, iirc) when the system's priorities for unavoidable collisions are set.
 
Conceit and needlessly definitive statements aside, I agree with RC45 that the computers will screw up -- it's what computers do -- and if autonomous cars penetrate the market quickly enough before any major incidents occur, there is potential for disaster.

Even the primitive intervention systems that have been in use for decades glitch out or make wrong "decisions", sometimes risking bodily harm or death. I would expect any car with autonomous systems to require them to be disabled for safety, physically cutting them out if necessary, by the time the vehicle turns 10-15 years old. This is already the reality for some of us who drive aged '90s cars with comparatively simple things like ABS or traction control.

Sure, CPUs are basically better than us, but electronic sensors are vulnerable and even more fallible than we are, and the CPU relies upon them. While it's true that plain old mechanical things like brakes can fail, I find it foolhardy to layer another potential failure point over the top of those mechanical components, with the capacity to override the driver and cause an accident when there's nothing else wrong with the brakes, engine, or steering.

I think some of you guys are too busy embracing the possibilities to consider the harm that an entire fleet of sensor-driven automobiles could cause. I don't think people are "lazy" for accepting this, I'm not worried about being forced into an autonomous car within my lifetime, and I would love nothing more than to allow people who are uninterested in driving to quit ruining things for the rest of us. But I fully expect this technology to make mistakes, possibly on an outrageously frequent basis.

All of my tongue in cheek supercilious and overtly super hyperbolic over the top statements aside, you have astutely summed up the major flaw in any system that relies on masses of artificial sensory input.

I believe you have hit the nail on the head - champions of the concept are blind to the reality of implementation.

And yes, even Audi are a profit driven corporate entity that has been known to promote and distribute flawed equipment only to be forced to later recall and fix their screw-ups.

Both you and I have no doubt that the compute power and programming logic exist to create autonomous control systems for vehicles. We are, however, confident in our disbelief that a suitably fault-free sensory system could be built to allow the aforementioned autonomous control system to be reliable enough for general implementation, or that the 'system' could safely and correctly account for every possible failure scenario.

Simply observe how vulnerable all modern cars are to across-the-board electrical hiccups due to a loose ground, slight over- or under-voltage conditions, the resistance changes such sensors suffer over time - or just a bad battery.

Even the 'simple' electronic throttle with multiple redundant sensors will shut the engine down if the 'system' cannot verify pedal position accurately within a certain time span. And most of the time the cause of the error is an anomalous resistance value that is cured with a part swap.
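As an illustration of that pedal-verification idea, here is a hedged sketch; the tolerances, time window, and names are all invented, not any manufacturer's actual logic:

```python
# Hypothetical plausibility check between two redundant throttle pedal
# sensors. The idea sketched: the two signals must track each other
# within a tolerance, or the system falls back to a shutdown mode.
# All values and names are invented for illustration.

TOLERANCE = 0.05        # max allowed disagreement (fraction of full scale)
FAULT_WINDOW_S = 0.2    # how long disagreement may persist before shutdown

class ThrottleMonitor:
    def __init__(self):
        self.fault_since = None  # timestamp when disagreement began

    def check(self, sensor_a: float, sensor_b: float, now: float) -> str:
        if abs(sensor_a - sensor_b) <= TOLERANCE:
            self.fault_since = None
            return "ok"
        if self.fault_since is None:
            self.fault_since = now          # disagreement just started
        if now - self.fault_since >= FAULT_WINDOW_S:
            return "shutdown"               # pedal position unverifiable
        return "degraded"                   # transient glitch, keep watching

m = ThrottleMonitor()
print(m.check(0.40, 0.41, now=0.0))   # ok
print(m.check(0.40, 0.60, now=0.1))   # degraded (disagreement just started)
print(m.check(0.40, 0.60, now=0.4))   # shutdown (fault persisted too long)
```

The "anomalous resistance value" failure described above would show up here as a persistent disagreement between the two channels, which is exactly the branch that ends in shutdown.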

Accurately assessing the input of proximity sensors, yaw sensors, accelerometers, wheel speed sensors, attitude sensors, steering wheel position sensors, actual steering position, etc., then comparing all the data - intended steering position vs. current wheel position vs. actual measured yaw vs. current four wheel speeds vs. side G-loading vs. ground speed vs. throttle position - 1,000 times per second in order to determine the next adjustment is easily handled by modern CPUs and operating systems. However, this entire system hinges on every feedback value meeting a predetermined range, and when that test or check fails, the system drops into a failed mode - that is the point at which all your automated bases are belong to us...
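A minimal sketch of that range-checking step, with entirely invented sensor names and ranges, might look like this - the point is the single out-of-range value that tips everything into a failed mode:

```python
# Caricature of one sensor-fusion control step. Sensor names and ranges
# are invented; the point is the final branch, where one implausible
# feedback value drops the whole system into a failed mode.

EXPECTED_RANGES = {
    "yaw_rate": (-3.0, 3.0),         # rad/s
    "lateral_g": (-1.5, 1.5),        # g
    "wheel_speed_fl": (0.0, 120.0),  # m/s, front-left wheel
    "steering_angle": (-8.0, 8.0),   # rad at the column
}

def control_step(readings: dict) -> str:
    for name, value in readings.items():
        lo, hi = EXPECTED_RANGES[name]
        if not (lo <= value <= hi):
            # One implausible value and the chain of cross-checks
            # is suspect: enter failed mode.
            return f"FAILED_MODE ({name}={value})"
    # ... normally: fuse readings, compare intended vs. actual state,
    # and compute the next brake/steering adjustment here ...
    return "adjust"

print(control_step({"yaw_rate": 0.2, "lateral_g": 0.8,
                    "wheel_speed_fl": 33.0, "steering_angle": 0.1}))
print(control_step({"yaw_rate": 0.2, "lateral_g": 0.8,
                    "wheel_speed_fl": -5.0, "steering_angle": 0.1}))
```

In a real system the failed-mode branch would trigger a defined fallback (limp home, handover to the driver), but the structural point stands: the loop is only as good as the plausibility of its inputs.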
 
"Don't be silly Meredith, we don't need a motor car. Our horse works just fine and is more robust!"


Edit. :lol:
Keep on digging.
 
"Don't be silly Meredith, we don't need a motor car. Our horse works just fine and is more robust!"

Unable to dispute my opinion on a factual level you resort to this type of tripe response.

You really have no idea how the modern automobile works at a network, bus, sensor and CPU level do you?
 
Both you and I have no doubt that the compute power and programming logic exist to create autonomous control systems for vehicles. We are, however, confident in our disbelief that a suitably fault-free sensory system could be built to allow the aforementioned autonomous control system to be reliable enough for general implementation, or that the 'system' could safely and correctly account for every possible failure scenario.

Autonomous systems don't need to be flawless, they just need to be safer and have fewer faults than a human driver to be a success. If autonomous cars, or at least cars with autonomous features, reduce the overall rate of accidents and fatalities, then everyone wins. Even older cars without autonomous modes will benefit from lower insurance premiums due to fewer accidents.
 
Unable to dispute my opinion on a factual level you resort to this type of tripe response.

You really have no idea how the modern automobile works at a network, bus, sensor and CPU level do you?
Well now. Looks like you've finally got something right, after citing everything you posted beforehand as fact.

 
Now I'm just too afraid to get in a car for fear of catastrophic failure of all components!

No need to be - just be aware that even the most modern control system relies on the integrity of the sensory input. And there are many scenarios where the final outcome is not what the designers and programmers intended.

Take the ultra-modern ABS/AH/TCS systems with built in cold weather Ice-Mode algorithms.

They do not know what to do in a high-speed approach to a bumpy turn on a dry road.

If you drive into a bumpy high-speed turn and brake very hard, you will encounter a point where one or more of the wheels leave the road surface. The wheel will instantly lock up, and the system will note that one of the wheels has either stopped turning or its wheel speed sensor has failed.

If the system decides the wheel speed sensor has failed, it goes into 'shutdown mode', the driver loses effective brake control, and the car careens off the road.

If the system is able to determine that the wheel speed sensor has not failed, but the other wheels are still rotating at various speeds, the car is also experiencing some yaw/lateral acceleration, and the driver is turning the steering wheel - maybe even opposite to the intended path of the vehicle - the system will deduce that the car is entering a flat spin on ice, modulate the brakes, and prevent the car from slowing any further.

This would be great if the car were on ice and in a spin. However, if the car is in fact entering a bumpy high-speed turn hard on the brakes, the result of the 'automated control system's' interpretation of the sensory input is to put the driver into the wall.
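The misinterpretation described above can be caricatured in a few lines. Thresholds and logic are invented, but they show how a locked wheel plus countersteering looks identical to a spin on ice from the sensors' point of view:

```python
# Illustrative sketch of why an 'ice mode' heuristic can misread a bumpy
# dry corner. Thresholds and logic are invented; the point is that the
# two situations produce the same sensor signature.

def classify(wheel_speeds, yaw_rate, steering_angle):
    slowest, fastest = min(wheel_speeds), max(wheel_speeds)
    one_wheel_locked = slowest < 0.2 * fastest
    countersteering = steering_angle * yaw_rate < 0  # steering against the yaw
    if one_wheel_locked and countersteering:
        # The heuristic concludes: flat spin on ice -> release brake pressure.
        return "ice_mode: reduce braking"
    return "normal: full braking available"

# A bumpy dry corner under hard braking: one wheel briefly airborne and
# locked, driver correcting -- indistinguishable from the ice signature.
print(classify(wheel_speeds=[2.0, 28.0, 29.0, 30.0],
               yaw_rate=0.6, steering_angle=-0.3))
```

From wheel speeds, yaw, and steering alone there is no way for this heuristic to tell "airborne wheel on a dry bump" from "spin on ice", which is the ambiguity the scenario above hinges on.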

The above scenarios happen a lot more often than you would care to believe.

The reason is that the folks who design and program the systems do so for the most common simple scenarios, not the unique scenarios - for those scenarios they recommend you TURN OFF the control systems and rely on... **gasp** HUMAN control ;)
 
They do not know what to do in a high-speed approach to a bumpy turn on a dry road.

If you drive into a bumpy high-speed turn and brake very hard, you will encounter a point where one or more of the wheels leave the road surface. The wheel will instantly lock up, and the system will note that one of the wheels has either stopped turning or its wheel speed sensor has failed.

You know that scenario is probably pretty rare, and even then whether you have sensors doing the work for you or you're doing your own work, there's a pretty good chance you're going to wreck if you're taking a turn at high speed on a bumpy road. No amount of training or input is going to save you if you're driving above the limits of the road, your car, and your abilities.
 
You know that scenario is probably pretty rare, and even then whether you have sensors doing the work for you or you're doing your own work, there's a pretty good chance you're going to wreck if you're taking a turn at high speed on a bumpy road. No amount of training or input is going to save you if you're driving above the limits of the road, your car, and your abilities.

Took Google 0.5s to come up with this video demonstration of an example of Ice Mode intervention causing issues.


These scenarios come up, and competent race car drivers have wrecked because the systems failed at a logic/sensory-input level. These scenarios occurred in closed, controlled situations where the cars and drivers in question were not driving beyond either's capabilities.

The point is that as the systems become more interdependent, the risk of catastrophic failure increases as smaller and smaller components experience either failure or false input.

These are the facts and realities of the world. I am not making this up, just providing you with the information.

Are you ok with the 'odd scenario' coming up while you and your family are in that automated car?

You will have no problem being 'the statistic'?
 
These are the facts and realities of the world. I am not making this up, just providing you with the information.

Where are you getting your facts from? They seem awfully skewed and biased. Can you please provide a source, so we can look at it and form our own opinion on whether it's biased or not?

Are you ok with the 'odd scenario' coming up while you and your family are in that automated car?

You will have no problem being 'the statistic'?

Considering a computer in those situations will be able to act quicker than me and give me a better chance of avoiding the wreck altogether, then yes, I'm OK with it. Chances are, if it's that hairy of a situation where the car can't save itself, then I probably wouldn't have been able to do much to avoid the accident anyway. Also, cars are, for the most part, pretty safe. If an accident were to occur, I feel like I would be reasonably safe.
 
You really have no idea how the modern automobile works at a network, bus, sensor and CPU level do you?

You don't.

The issue with telling everybody how smart and educated you are and alluding heavily to real-world expertise is that once we've Googled you, we can find out how much you really know.

You don't enough about computers to make anything better than cheesy early 2000's ads for your dealership. Why would I think that you have any clue how modern sensors and driving algorithms work? This goes double when you talk about facts that you haven't actually shown us. :lol:
 
Take the ultra-modern ABS/AH/TCS systems with built in cold weather Ice-Mode algorithms.
Your entire example is proposed as if these systems should automatically detect and correct, on the spot, the imperfections arising from a driver error of approaching a corner too fast.

They're called electronic aids; it is still up to the driver to decide how to approach the situation. Race drivers are not exempt from this even in the most technologically advanced cars.

Keep digging in your garbage bin of scenarios at work for examples.
 
You don't.
And now I am supposed to respond 'I do' so you can get your infantile rocks off on a 'I do' 'You don't' exchange?

Sorry to disappoint you, but not only do I understand how these systems interact and communicate - I also understand the entire premise of the programming behind the scenes. I am actually a programmer by trade - and my background is in 1st, 2nd and 3rd gen languages - none of this 4th and 5th gen auto code-generating nonsense. F5 to recompile on the fly and thinking bugs are just part of the landscape.

The issue with telling everybody how smart and educated you are and alluding heavily to real-world expertise is that once we've Googled you, we can find out how much you really know.
Then google and prove me to be wrong. I guess in your mind my defending myself from attacks is heinous - but your defending your attacking position is noble. Hypocrite much?

You don't [know] (sic) enough about computers to make anything better than cheesy early 2000's ads for your dealership. Why would I think that you have any clue how modern sensors and driving algorithms work? This goes double when you talk about facts that you haven't actually shown us. :lol:
WTF are you spewing here? Dealership? WTF are you rambling about? You think I am somehow involved in buying and or selling cars? LOL - what a joke. Talk about you not having a clue.

Understanding the mechanism of modern automobile sensor-based control systems is not hard. You thinking that it is some sort of black art is the real kicker - you somehow think these Audi engineers are magicians? What they have pulled off is achieved with current hardware and software. The tough part is going to be implementing it across the general civilian automobile population at any level of reliability, considering how reliant these systems are on sensory input.

You may be familiar with the concept of junk in junk out.

Your entire example is proposed as if these systems can automatically detect & correct imperfections on the spot from a driver error of approaching a corner too fast.
Pretty short-sighted of you there. How is entering a corner at high speed driver error? In your utopia of automated control nothing ever goes wrong - period. After all, software in your mind is perfectly programmed, and hardware sensor interfaces never fail.
They're called electronic aids; it is still up to the driver to decide how to approach the situation. Race drivers are not exempt from this even in the most technologically advanced cars.
Exactly - electronic aids - and in this case the electronic aid INCORRECTLY interprets the sensory input, producing completely the wrong result.
Notwithstanding the fact that ABS-assisted cars take longer to slow down on ice than non-ABS cars.
How about when the input sensor fails - then what?
Keep digging in your garbage bin of scenarios at work for examples.
How about the front-wheel-drive car pulling out into traffic that hits a patch of dirt or ice: the car then manages the wheel spin and bogs down to regain composure, leaving it sitting in the path of oncoming traffic.
These are not fictional scenarios BTW.
But what would I know, in your mind I am a car salesman. :)

In the case of the automated vehicle not only relying on the sensory grid for self control but also for group control, you can see how critical the maintenance of sensory integrity is - right?
 
And now I am supposed to respond 'I do' so you can get your infantile rocks off on a 'I do' 'You don't' exchange?

No, you're supposed to read the rest of the post. It's called an introduction.

Sorry to disappoint you, but not only do I understand how these systems interact and communicate - I also understand the entire premise of the programming behind the scenes. I am actually a programmer by trade - and my background is in 1st, 2nd and 3rd gen languages - none of this 4th and 5th gen auto code-generating nonsense. F5 to recompile on the fly and thinking bugs are just part of the landscape.

Meaningless claims. This is the internet where everyone is an astronaut and Navy SEAL.

Then google and prove me to be wrong.

Not how proof works.

I guess in your mind my defending myself from attacks is heinous - but your defending your attacking position is noble. Hypocrite much?

No. The pro side to this argument has posted evidence, the OP in this thread for instance. You have posted nothing of value as is par for the course in your arguments on this board.

WTF are you spewing here? Dealership? WTF are you rambling about? You think I am somehow involved in buying and or selling cars? LOL - what a joke. Talk about you not having a clue.

Hm, well I googled RC45 and Corvette and found you on several other boards. I took from this thread that the RC in your name stood for "Rick Conti".

If I'm wrong then I apologize.

Understanding the mechanism of modern automobile sensor based control systems is not hard.

That's not what this thread is about. It's about whether or not sensor based control can be superior to human control. The fact is that it can be and has already proven to be in several cases. You fail to admit this and it's a big thorn in your argument.

My self driving car won't need to work in the snow. It won't need to work in gravel, it won't need to work in Afghanistan.

It only needs to work where I commute, and that for me and millions of other people is sunny streets and highways.

But we've said this repeatedly and you've failed to learn. Figures.

You thinking that it is some sort of black art is the real kicker - you somehow think these Audi engineers are magicians? What they have pulled off is achieved with current hardware and software.

That's funny, I didn't say anything that would suggest that. You're even better at pulling crap from thin air than I am!

The tough part is going to be implementing it across the general civilian automobile population at any level of reliability, considering how reliant these systems are on sensory input.

Which is something Google and Tesla have been doing for a while now with good results.

You may be familiar with the concept of junk in junk out.

In many cases the inputs that the computer would get are the same ones a human would get but with the addition of GPS, pre-programmed data, and traffic reports.

This is in addition to the fact that computers don't get tired, distracted, etc.
 
How about when the input sensor fails - then what?
It's hard to answer without a specific system to look at. Most likely, there would be a layer of redundancy. Maintenance would also be a factor. We're starting to see machinery that can self-diagnose problems and assist with maintenance; you might see these automatic cars come with a kit for the garage that eases inspections for the owner.
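That redundancy layer often takes the form of majority voting across duplicated sensors. A generic sketch of the idea, not any specific system's implementation:

```python
from statistics import median

# Generic 2-of-3-style voting across redundant sensors: a single failed
# reading is outvoted instead of taking the system down. Purely a sketch;
# the tolerance and readings are invented.

def vote(readings, tolerance=1.0):
    """Return the median reading and flag any sensor that disagrees with it."""
    m = median(readings)
    faulty = [i for i, r in enumerate(readings) if abs(r - m) > tolerance]
    return m, faulty

value, faulty = vote([50.1, 49.9, 120.0])  # third sensor has failed high
print(value)   # 50.1 -- the bad reading is outvoted
print(faulty)  # [2]  -- and flagged for the self-diagnosis log
```

The flagged index is what a self-diagnosing system would surface to the owner or the garage: the vehicle keeps working on the two good channels while reporting which sensor needs replacement.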

How about the front-wheel-drive car pulling out into traffic that hits a patch of dirt or ice: the car then manages the wheel spin and bogs down to regain composure, leaving it sitting in the path of oncoming traffic.
As above, this is pretty vague as is. It's something you can see happening to a car with no computer, though. If the car can communicate with other vehicles, then the oncoming traffic might just get out of the way before it's an issue.
 
Pretty short-sighted of you there. How is entering a corner at high speed driver error?
How is it not? If the corner is bumpy and uneven, there's only so much speed that can be carried through it safely.

In your utopia of automated control nothing ever goes wrong - period. After all, software in your mind is perfectly programmed and hardware sensor interfaces never fail.
Nobody has said machines are invincible, but you have yet to prove how they're more dangerous than humans behind the wheel. All your examples come down to human error, not computer error. You've basically continued to prove the case you're arguing against with your own examples.

Exactly - electronic aids - and in this case the electronic aid INCORRECTLY interprets the sensory input, producing completely the wrong result.
Notwithstanding the fact that ABS-assisted cars take longer to slow down on ice than non-ABS cars.
How about when the input sensor fails - then what?
Just as with the first example, a computer on autopilot would be built with radar to observe conditions ahead & make correct adjustments.

Your examples, which keep reaching for extreme and rare conditions to throw computers into a "kill all humans" ideology, also continue to be completely avoidable by the human placing the car into that situation.
How about the front-wheel-drive car pulling out into traffic that hits a patch of dirt or ice: the car then manages the wheel spin and bogs down to regain composure, leaving it sitting in the path of oncoming traffic.
These are not fictional scenarios BTW.
Fictional or not, they're extreme scenarios that would put man or machine at a high chance of disaster. The issue is you keep ignoring that all your scenarios are the result of a human placing a machine into the situation, and thus, somehow, the machine is at fault for killing someone.

Here's one of your silly scenarios: a person approaches an intersection too fast and kills someone. Whose fault is it - the driver's, or the computer's for not acting quickly enough? That's how all your scenarios play out.
 
It's hard to answer without a specific system to look at. Most likely, there would be a layer of redundancy. Maintenance would also be a factor. We're starting to see machinery that can self-diagnose problems and assist with maintenance; you might see these automatic cars come with a kit for the garage that eases inspections for the owner.

Most likely, would, might. Not very confidence-inspiring, considering that automobile companies are profit-driven and will seek the lowest legal common denominator.

Layer of redundancy, self-diagnose problems and assist with maintenance, automatic cars coming with a kit for the garage. This is exactly what one of my ignored comments was about - that the level of mandated redundancy required to bring these terrestrial civilian automated systems up to the reliability of commercial aviation and military systems will be cost-prohibitive.

Why are you obsessed with the idea of automated entry-level minuscule people carriers that may carry huge purchase prices just to meet minimum safety standards?

Again, in that scenario it is a solution looking for a problem. The same way the $40,000 hybrid is moronic: there are no savings in fuel costs if the car is 2x the price of a similar gasoline model, unless the price of gasoline is artificially increased to make the hybrid appear cheap.

Same logic here - what is the point of these automated vehicles if they cost 10x more than the simple analog car?

Just to save 1 life - at any cost? LOL

As above, this is pretty vague as is. It's something you can see happening to a car with no computer, though. If the car can communicate with other vehicles, then the oncoming traffic might just get out of the way before it's an issue.
Another of my previous points being promoted I see - the simple idea that in order for the system to work, all vehicles need to be automated.

Thanks for picking my points out 1 by 1 and promoting them.

Just as with the first example, a computer on autopilot would be built with radar to observe conditions ahead & make correct adjustments.
And your point is what? How about when the sensory input is either misinterpreted or simply fails? Each layer of sensors you add adds another point of failure. The more complex your input matrix, the more complex your error checking needs to be - and the more likely a single sensor failure is to cause a complete system failure, unless you build in more redundancy and, by definition, more expense.

See where this is going?
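The trade-off being argued here - every added sensor is another point of failure unless you pay for redundancy - is ordinary series-versus-redundant reliability arithmetic. With invented numbers:

```python
# Series vs. redundant sensor reliability, with invented numbers.
# A chain of single sensors fails if ANY one fails; a 2-of-3 voted
# triple fails only if two of its three copies fail together.

p = 0.01   # hypothetical chance a single sensor fails on a given trip
n = 10     # number of sensor subsystems the control loop depends on

# Chain of single sensors: the system survives only if all n survive.
p_fail_series = 1 - (1 - p) ** n

# Each subsystem tripled with 2-of-3 voting:
p_triple_fails = 3 * p**2 * (1 - p) + p**3   # 2 or 3 of the 3 copies fail
p_fail_redundant = 1 - (1 - p_triple_fails) ** n

print(f"10 single sensors: {p_fail_series:.4f} chance of a system fault")
print(f"10 voted triples:  {p_fail_redundant:.6f} chance of a system fault")
```

The arithmetic supports both halves of the argument: stacking single sensors really does multiply failure points, and voting redundancy really does tame it, at roughly triple the sensor cost.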
 
And your point is what? How about when the sensory input is either misinterpreted or simply fails? Each layer of sensors you add adds another point of failure. The more complex your input matrix, the more complex your error checking needs to be - and the more likely a single sensor failure is to cause a complete system failure, unless you build in more redundancy and, by definition, more expense.

See where this is going?
And yet you still completely ignore that all your examples have been exposed as human errors rather than machine errors, once again. :lol:

The point is to show you the difference in how a machine would actually handle the situation, rather than a human throwing it into one thanks to their own miscalculations.

Is this going the same way as when you stated automated cars would never come to fruition, before Audi made a mockery of you? Even Tesla laughs at your "correct and educated observation that in no way, shape, or form is automated civilian general automotive transportation viable or achievable".


Not 100% computer controlled, but intelligent enough to handle speeds, corners, & objects in front of it without human interference if asked. The technology has to start somewhere, and this is close enough to what Audi envisions.
 
Most likely, would, might. Not very confidence-inspiring, considering that automobile companies are profit-driven and will seek the lowest legal common denominator.
No, we're not talking about specific systems so I can't say for certain what will happen.

Also, going for profit means avoiding lawsuits and avoiding going out of business.

Layer of redundancy, self-diagnose problems and assist with maintenance, automatic cars coming with a kit for the garage. This is exactly what one of my ignored comments was about - that the level of mandated redundancy required to bring these terrestrial civilian automated systems up to the reliability of commercial aviation and military systems will be cost-prohibitive.
There is nothing about cost there. If you want to factor cost in you can, and as with most technology, costs will come down as it matures. You might only see high-end cars driving themselves (like an Audi) at first; then, a couple of generations later, lower-end models may catch on.

The only thing I mentioned that I would even worry about is the garage maintenance kit. The others either already exist in some form or wouldn't be terribly difficult to add in, even if they might not be immediate options on base-model Civics.

Why are you obsessed with the idea of automated entry-level minuscule people carriers that may carry huge purchase prices just to meet minimum safety standards?
I don't think that anyone is. The thread has been fairly abstract, which is why the idea of auto driving cars being unrealizable was shot down. It's a very premature claim.

That doesn't imply that the technology is ready now to be implemented everywhere. It will come in steps.

Again, in that scenario it is a solution looking for a problem. The same way the $40,000 hybrid is moronic: there is no savings in fuel costs if the car is 2x the price of a similar gasoline model. Unless the price of gasoline is artificially increased to make the hybrid appear cheap.
I don't see how the value isn't self evident. People pay other people to drive cars for them, people pay millions for exclusive cars. If the first generations of autodrivers are expensive that won't necessarily stop them from selling.

There is nothing wrong with a $40,000 hybrid either for the record. If it's too expensive to produce as a commuter model, you can sell it as something else until the price comes down. Hybrid and luxury aren't mutually exclusive.

Same logic here - what is the point of these automated vehicles if they cost 10x more than the simple analog car?

Just to save 1 life - at any cost? LOL
Being ten times more expensive now doesn't mean they will be so expensive later.


Another of my previous points being promoted I see - the simple idea that in order for the system to work, all vehicles need to be automated.
That's still not true. If all the cars on the road were fully integrated, they could scoot around the problem before it was a problem. That is one possible solution to the issue, not the only solution.

Also, the cars don't even have to be automated for something like this to work. The vehicle intending to pull out could just send a signal to all surrounding cars, including ones with human drivers. This is done on some roadways without the cars even entering the equation: you have warning lights telling oncoming traffic that a car is going to merge.
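The merge-warning idea above doesn't require full automation on the receiving end - it only requires that each receiver reacts in whatever way it can. Here is a hypothetical Python sketch (the message fields and handler are made up for illustration, not any real V2V protocol): an automated car adjusts itself, while a human-driven car just lights a dashboard warning.

```python
# Hypothetical sketch of a merge-warning broadcast. The merging
# vehicle sends its intent; each receiver handles it in its own way.
# Message fields and behaviour are illustrative, not a real protocol.

class MergeWarning:
    def __init__(self, road_id, lane, eta_seconds):
        self.road_id = road_id          # which road the merge happens on
        self.lane = lane                # target lane of the merging car
        self.eta_seconds = eta_seconds  # how soon the merge will occur

def on_merge_warning(msg, is_automated):
    """React to a merge warning: automated cars adapt, human-driven
    cars simply get a dashboard warning light."""
    if is_automated:
        return (f"adjusting speed: car merging into lane {msg.lane} "
                f"in {msg.eta_seconds}s")
    return "warning light: vehicle merging ahead"

msg = MergeWarning(road_id="A5", lane=2, eta_seconds=4)
print(on_merge_warning(msg, is_automated=True))
print(on_merge_warning(msg, is_automated=False))
```

The point of the sketch is the mixed fleet: the sender doesn't need to know or care whether the receiver is a computer or a person.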
 
Just as with the first example, a computer on autopilot would be built with radar to observe conditions ahead & make correct adjustments.



Strawman much?

How would the Tesla manage a sensor failure?

I don't see how the value isn't self evident. People pay other people to drive cars for them, people pay millions for exclusive cars. If the first generations of autodrivers are expensive that won't necessarily stop them from selling.
The same people that pay $99 a month for 60 months to afford an entry-level Kia?
 
Nope. If the machine had been in control of the car in your situation from the get-go, then you would have a valid scenario. For probably the thousandth time, however, you choose to ignore the flaw in all your examples: human error puts the machine in an unfortunate predicament, yet somehow it is the machine's fault for not playing Superman. Your only source of "facts" remains this & only this train of thought. For someone so above us simpletons, it's amazing you're incapable of supporting your argument in any way other than dreaming up rare & extreme situations that have, thus far, been the result of humans.

The Tesla allows the driver to regain control at any time. If a sensor fails, the driver can decide how to intervene. But it's pointless answering, because you've already decided this is what happens:
[gif: tumblr_m48aj2h6oR1ql2qn4o1_500.gif]
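The hand-back behaviour being described can be sketched as a small mode machine. This is a minimal illustrative sketch, not how Tesla or anyone else actually implements it, and all the states and timings are assumptions: on a sensor fault the system alerts the driver, and if nobody takes over within a deadline it degrades to a controlled stop rather than driving blind.

```python
# Illustrative mode machine for sensor-fault hand-back. States and
# the takeover deadline are assumptions for the sketch, not taken
# from any real autopilot implementation.

def next_mode(mode, sensor_ok, driver_took_over, seconds_since_fault,
              takeover_deadline=5):
    """Compute the next driving mode given the current state."""
    if mode == "AUTO" and not sensor_ok:
        return "ALERT_DRIVER"          # fault detected: ask the human in
    if mode == "ALERT_DRIVER":
        if driver_took_over:
            return "MANUAL"            # human has the wheel
        if seconds_since_fault > takeover_deadline:
            return "CONTROLLED_STOP"   # nobody took over: stop safely
    return mode                        # otherwise keep the current mode

print(next_mode("AUTO", sensor_ok=False, driver_took_over=False,
                seconds_since_fault=0))                          # ALERT_DRIVER
print(next_mode("ALERT_DRIVER", sensor_ok=False,
                driver_took_over=True, seconds_since_fault=2))   # MANUAL
```

The design point of the CONTROLLED_STOP branch is that the fallback is never "deaf, dumb and blind at speed" - if neither the sensors nor the human are available, the car's job is to get itself stopped.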
 

So you propose the automated car to eliminate human error, but when the automated system suffers a sensory failure you pass control to the human?

Why bother wresting control from the human in the first place then?

Forget the unfortunate predicament - what about under normal circumstances, if the automated car suffers a sensory failure? After all, even at normal speeds the car is deaf, dumb and blind if it suffers a sensory input failure, and all the logic in the world cannot help an input-less computer.

You do understand how input-driven systems operate, right?
 
These scenarios come up, and competent race car drivers have wrecked because the systems failed at a logic/sensory-input level. These scenarios occurred in closed, controlled situations where the cars and drivers in question were not driving beyond either's capabilities.

[citation needed]

Are you ok with the 'odd scenario' coming up while you and your family are in that automated car?

You will have no problem being 'the statistic'?

'Odd scenario' can accurately describe the failure of any of the other thousands of parts of a car that could malfunction at any point.


What are you, allergic to reading? Automation isn't about eliminating human error, it's about minimizing error, period.

No, you still haven't addressed this. Either start backing up your ridiculous claims, or stop making them.
 
