Autonomous Cars General Discussion

Elon Musk has inexplicably trashed LIDAR as a method for Autonomous Vehicle "sight".

Andrej Karpathy, Tesla's Senior Director of AI, took the stage and explained that the world is built for visual recognition. LIDAR systems, he said, have a hard time distinguishing between a plastic bag and a rubber tire.

Expensive sensors that are unnecessary. It's like having a whole bunch of expensive appendices. Like one appendix is bad, well how about a whole bunch of them? That's ridiculous. You'll see.

Disregarding that a camera is merely a sensor and 'recognition' relies on pattern recognition regardless of the input, wouldn't you want both? Wouldn't you want the car to understand the precise 3D environment it is traveling through, instead of trying to infer it from photography? This looks like a deflection to me, and a disingenuous one at that. It would be one thing to say this if Tesla had extensively researched LIDAR and determined it wasn't feasible...but I'm not aware that Tesla has tried to use LIDAR at all. I see several different development groups testing autonomous cars around downtown SF on a daily basis, and they all have the tell-tale LIDAR setups. I've never seen a Tesla driving itself in an urban setting. I know several people who work in AV research, and all of them say that LIDAR is essential.

Having said that, I still don't think true consumer-grade Level 5 autonomy will ever happen. Levels 3 & 4 are fundamentally unsafe, so they won't (or shouldn't) happen either. That leaves us with Levels 1/2, advanced cruise control basically. I don't believe it will go beyond this point before the current business cycle ends, and I don't think development will continue if there is a recession...possibly ever.

I'll be waiting to be proven wrong and will gladly admit it when I am.
 
Seems like I'm the only one who posts in this thread...:lol:

Autonomous vehicle skepticism content ahead:


Automakers Are Rethinking the Timetable for Fully Autonomous Cars

[Ford Motor Co. CEO Jim Hackett] We overestimated the arrival of autonomous vehicles... Its applications will be narrow, what we call geo-fenced, because the problem is so complex.

Krafcik [CEO of Google’s self-driving car unit Waymo] went on to say that the auto industry might never produce a car capable of driving at any time of year, in any weather, under any conditions. “Autonomy will always have some constraints,” he added.

[Sam Abuelsamid, principal analyst for Navigant Research, which publishes an extensive annual assessment on the state of automated vehicles] There's no guarantee that we will ever have automated vehicles in the foreseeable future that are capable of operating everywhere, all the time.

Automakers say it's unlike anything they've seen before. "It's the most engineering-intensive thing ever attempted," said one automotive executive in an off-the-record discussion with Design News. "And you need lots of the world's best engineers to do it. I'm not talking about tens or hundreds of engineers. It's in the thousands. We're talking about billions of dollars."

I think Geo-Fenced autonomous zones are going to be as far as full autonomy gets, ever. Now that could still be a huge deal. I'm thinking Highway 5 in CA (the main artery that connects SoCal to NorCal + the entire central valley) could possibly be Geo-Fenced, allowing people to drive to the 5 themselves, and then let the road essentially ferry them down to LA. Other high-volume, long-distance corridors like I-35, I-10, and I-45 in Texas could be similar candidates. Perhaps even the entire interstate highway system could be upgraded/geo-fenced in time. To me that makes quite a bit of sense from an efficiency point of view, and the infrastructure & cars could be developed in a standardized & complementary way. But there's no way rural counties will ever improve their broken-ass roads enough to be reliably/safely usable by autonomous cars, and I just don't see how autonomous cars could ever be remotely safe during snowfall.

If there is a recession in the next year or two, I have a feeling that most, if not all, autonomous car development programs will be axed. And when that happens, I sincerely doubt they will start back up again until 2025 or 2030, if they ever do. That's my prediction.
 
Geo-fencing is definitely a good way to ease into autonomy, especially with where the technology is at right now, but even with all the problems Tesla has with their system, they've been able to get it to work quite well without it.



And I definitely agree, if there's ever a recession, autonomous car development will definitely be put on hold.
 
Geo-fencing is definitely a good way to ease into autonomy, especially with where the technology is at right now, but even with all the problems Tesla has with their system, they've been able to get it to work quite well without it.



And I definitely agree, if there's ever a recession, autonomous car development will definitely be put on hold.


I think the deceptive thing about autonomy is that the difficulty curve probably resembles an exponential expression, rather than a linear one.
(Y axis should be Time/effort, X axis should be Level 5 completion %):

[Image: exponential growth curve]


Getting a car to accelerate autonomously is actually quite easy. Getting an AV to the point where it can handle an effectively unlimited set of variables and circumstances is, I think, functionally impossible. How do you design for an open system?

I think what's happened is that early and apparently substantial progress has lent a sense of inevitability to AV development...a linear development curve. In reality, I think getting 80% of the way there (which I think the Tesla video demonstrates) is probably more like 2% of the work. (To reference the graph again, the Tesla video you posted probably represents progress just above the "T" in "time" on the green curve.)
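Just to make that shape concrete, here's a toy calculation (the steepness constant is made up by me, purely for illustration) showing how an exponential effort curve can put 80% of the capability at only a couple of percent of the total work:

```python
import math

# Toy model (my own assumption, not anyone's real data): the cumulative effort
# needed to reach completion fraction x of Level 5 grows exponentially, scaled
# so that effort(1.0) == 1.0, i.e. 100% of the total work.
def effort(x, k=20.0):
    return (math.exp(k * x) - 1) / (math.exp(k) - 1)

for x in (0.5, 0.8, 0.95, 0.99):
    print(f"{x:.0%} of Level 5 capability -> {effort(x):.1%} of the total effort")

# With k = 20 (arbitrary), 80% of the capability costs roughly 2% of the effort,
# and almost all of the work is packed into the last few percent.
```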

To have, as the unnamed executive claims, thousands (probably more like tens of thousands) of engineers working on the problem at the same time, and still be 10 years out from a meaningful rollout, is kind of insane. Getting to the Moon required far less manpower.

I wonder what the total development expenditure for AVs will be? It could honestly be the most expensive human endeavor, ever.
 
I wonder what the total development expenditure for AVs will be? It could honestly be the most expensive human endeavor, ever.

They're teaching themselves, and not just on the road but in internal simulation as well. They use machine learning techniques which improve exponentially over time (like your chart). Here's a good explanation.



Honestly this is probably going to come faster and cheaper than people predict even today.
 
Still too early to say, "I told you so," but it's nice to at least see some humility for a change, instead of the usual pie-in-the-sky technophilic conviction that fully-autonomous cars are right around the corner. I think your assessment of the difficulty curve is apt, @Eunos_Cosmo.

For computers to get a leg up on that last swing of the difficulty curve will require some kind of game-changing breakthrough yet to be discovered, I figure. I don't think it's possible with the brute force of machine learning. I've seen machine learning wring some impressively complex stuff out of AI, but only in closed (usually simulated) environments or with limited variables.

I'm relieved if the companies involved have actually been approaching this with more care than was apparent from their promises. Well, PR is PR.
 
Still too early to say, "I told you so," but it's nice to at least see some humility for a change, instead of the usual pie-in-the-sky technophilic conviction that fully-autonomous cars are right around the corner. I think your assessment of the difficulty curve is apt, @Eunos_Cosmo.

I don't understand how the post from @Eunos_Cosmo is humility. Does he work for Tesla or something? Why do we think that curve is apt? Pure conjecture? All of the evidence points in the opposite direction, and so does all of the theory.
 
I don't understand how the post from @Eunos_Cosmo is humility. Does he work for Tesla or something? Why do we think that curve is apt? Pure conjecture? All of the evidence points in the opposite direction, and so does all of the theory.

I think he meant on the part of the executives who cautioned that AVs might not be coming as soon as earlier promised...

I also prefaced my entire post with "I think", which should have clearly indicated conjecture. I don't work in the industry but I know several people who do, as well as several others who work & do research in machine learning. All of them have said that the problem is enormous.
 
I think he meant on the part of the executives who cautioned that AVs might not be coming as soon as earlier promised...

I'm still not seeing that in your post or the one before it. Is it in the video? I see a reference to an unnamed executive in your post, but it's brief. Is that what we're talking about? I feel like I missed a news headline or something.

I also prefaced my entire post with "I think", which should have clearly indicated conjecture. I don't work in the industry but I know several people who do, as well as several others who work & do research in machine learning. All of them have said that the problem is enormous.

Yea, and that's exactly what exponentially growing systems are good at.
 
I'm still not seeing that in your post or the one before it. Is it in the video? I see a reference to an unnamed executive in your post, but it's brief. Is that what we're talking about? I feel like I missed a news headline or something.

I don't know how to help you. :confused:

Yea, and that's exactly what exponentially growing systems are good at.

Applying machine learning to playing chess or detecting lung cancer (where there is effectively a fixed number of variables and strict rules) is a different ball game compared to the very nebulous activity of "driving" (which has a very large, or even unlimited, number of variables).
 
I don't know how to help you. :confused:

Explain what executives you're talking about and what they said.

Applying machine learning to playing chess or detecting lung cancer (where there is effectively a fixed number of variables and strict rules) is a different ball game compared to the very nebulous activity of "driving" (which has a very large, or even unlimited, number of variables).

You mean like image recognition? Not just recognizing objects but recognizing the activity in the image (which is done now). It's really not so different, and the systems that already exist are just remarkable in their ability to do it. Also the dataset that is and will be generated to support this behavior is unbelievable. The road has strict, set rules as well.

There is absolutely nothing that suggests that AI can't handle this task really efficiently. If you're saying it's going to be 10 years before AI is good at driving in the snow where it can't see the road, or in other low-visibility scenarios, ok sure. I'm with you.
 
Explain what executives you're talking about and what they said.
Did you not see post #32? I included quotes from Ford & Waymo CEOs. I don't know if I would characterize what they said as humility (as @Wolfe did) ...more like hedging.

You mean like image recognition? Not just recognizing objects but recognizing the activity in the image (which is done now). It's really not so different, and the systems that already exist are just remarkable in their ability to do it. Also the dataset that is and will be generated to support this behavior is unbelievable. The road has strict, set rules as well.

There is absolutely nothing that suggests that AI can't handle this task really efficiently. If you're saying it's going to be 10 years before AI is good at driving in the snow where it can't see the road, or in other low-visibility scenarios, ok sure. I'm with you.

Sure the data set will be rife with routine road activity. But how many times will something uncommon have to happen before it is incorporated into the pattern recognition? You're always going to be working with a limited set of data (however large it is) against unlimited circumstances.

I also question how strict the rules of the road actually are from the perspective of system-design. Yes, technically, there are rules of driving, but the real world is not a closed system. The rules are not as predictably/consistently applicable as they are in a 'perfect' game of chess, for instance.
 
Did you not see post #32? I included quotes from Ford & Waymo CEOs. I don't know if I would characterize what they said as humility (as @Wolfe did) ...more like hedging.

No, I missed that. Thanks.

Sure the data set will be rife with routine road activity. But how many times will something uncommon have to happen before it is incorporated into the pattern recognition?

Not many, and with a data set that big, it will see uncommon events often. Far more often than a single human driver could. A dataset of even a million drivers would provide hundreds of lifetimes of driving experience in a single day. You'd see things every day that most drivers would never see. Humans are not as good at driving as we think, either. Humans make the same driving mistakes over and over; I see it all the time.
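Rough sanity check on that, with every number below being my own guess rather than anything from an actual fleet:

```python
# Back-of-the-envelope: how many human "driving lifetimes" of experience would a
# million-car fleet log per day? All constants are rough personal assumptions.
fleet_size = 1_000_000            # cars contributing data
hours_per_car_per_day = 1.5       # average time on the road per car
lifetime_driving_hours = 15_000   # ~50 years at ~300 hours/year behind the wheel

fleet_hours_per_day = fleet_size * hours_per_car_per_day
lifetimes_per_day = fleet_hours_per_day / lifetime_driving_hours
print(f"~{lifetimes_per_day:.0f} driving lifetimes of data per day")   # ~100
```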

The article says things like "what if a cardboard box blows into the street, can you run over it?" This is an easy job for a computer. That's really not a hard problem. It's actually easier for a computer to do than a human. A tougher problem is an object that can't be identified in the street in front of you, not moving. But I think humans probably make a worse decision in that scenario than a machine would. The human is less likely to see it in time, and more likely to chance it (I would think).

Machines can recognize that the ladder in the truck bed in front of you is not properly secured and back off or change lanes. This will happen while a human driver is playing with their cell phone or changing the radio, completely oblivious.

There are so many obvious advantages, and those advantages are so obviously realized by huge data sets, that it is inevitable. And the learning process on these datasets is not linear, it's geometric.

I also question how strict the rules of the road actually are from the perspective of system-design. Yes, technically, there are rules of driving, but the real world is not a closed system. The rules are not as predictably/consistently applicable as they are in a 'perfect' game of chess, for instance.


Yes it's more complicated than chess. But machines already do well in environments that are not rigidly defined and are less predictable than chess. Chess is not the limit of what AI is doing currently, it's far in the rearview mirror.
 
Yes it's more complicated than chess. But machines already do well in environments that are not rigidly defined and are less predictable than chess. Chess is not the limit of what AI is doing currently, it's far in the rearview mirror.

Can you point to some examples? I mean I know autonomous vehicles is one area. But I'm curious about other applications similar to what you describe.
 
Can you point to some examples? I mean I know autonomous vehicles is one area. But I'm curious about other applications similar to what you describe.

https://machinelearningmastery.com/how-to-caption-photos-with-deep-learning/

Take a look at step 2. Describe an image. This is a machine, assessing not just what's in an image, but what's happening in the image. And not even just the things that are happening (in a still image), but the things that human beings consider important that are "happening" in that still image. This is not a rigid environment with carefully crafted rules. You want your machine to amorphously predict what human beings will consider important, and that's not a set thing. This is a very hard problem and a terribly defined one, and AI solves it without blinking an eye.
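To give a sense of how small the core of that kind of model is, here's a rough sketch from memory of the "merge" style of captioning model that the tutorial describes (the layer sizes and vocabulary numbers are placeholders of mine, not the tutorial's exact code):

```python
from tensorflow.keras.layers import Input, Dense, Dropout, Embedding, LSTM, add
from tensorflow.keras.models import Model

vocab_size, max_len = 7500, 34    # placeholder sizes for the caption vocabulary/length

# Photo branch: a feature vector from a pretrained CNN (e.g. VGG16's 4096-d output).
photo_in = Input(shape=(4096,))
photo = Dense(256, activation="relu")(Dropout(0.5)(photo_in))

# Text branch: the caption generated so far, fed in one word at a time.
text_in = Input(shape=(max_len,))
text = LSTM(256)(Dropout(0.5)(Embedding(vocab_size, 256, mask_zero=True)(text_in)))

# Merge the two and predict the next word of the caption.
hidden = Dense(256, activation="relu")(add([photo, text]))
next_word = Dense(vocab_size, activation="softmax")(hidden)

model = Model(inputs=[photo_in, text_in], outputs=next_word)
model.compile(loss="categorical_crossentropy", optimizer="adam")
```

The interesting part is that nothing in there encodes rules about what matters in a photo; the model absorbs that entirely from captioned examples.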
 
https://machinelearningmastery.com/how-to-caption-photos-with-deep-learning/

Take a look at step 2. Describe an image. This is a machine, assessing not just what's in an image, but what's happening in the image. And not even just the things that are happening (in a still image), but the things that human beings consider important that are "happening" in that still image. This is not a rigid environment with carefully crafted rules. You want your machine to amorphously predict what human beings will consider important, and that's not a set thing. This is a very hard problem and a terribly defined one, and AI solves it without blinking an eye.

This is undoubtedly impressive, but kind of tailored to what deep learning/neural networks are good at. Deep learning is also not infallible:

However, deep neural networks are the same type of image-recognition algorithms that misidentified photos of Black people as gorillas. Laboratory tests reveal that deep neural networks are easily confused by minor changes. Something simple, like putting a sparkly unicorn sticker on a stop sign, can cause the image recognition to fail. Disrupting image recognition would result in a self-driving car failing to stop at a stop sign, which is likely to cause an accident or more pedestrian injuries.
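(The lab version of that sticker trick is what researchers call an adversarial example. For the curious, here's a minimal PyTorch sketch of the simplest such attack, the fast gradient sign method. This is my own illustration with a stand-in classifier, not the specific attack described in the article.)

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Nudge every pixel slightly in the direction that increases the model's loss.
    The result looks unchanged to a human but can flip the classifier's answer."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Tiny demo with a dummy classifier and a random "image".
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)
label = torch.tensor([0])
perturbed = fgsm_perturb(model, image, label)
```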

I think there is enough evidence not to be completely convinced that self-driving cars are coming on the timeline projected just a few years ago. Beyond that is mere speculation, of course.
 
This is undoubtedly impressive, but kind of tailored to what deep learning/neural networks are good at. Deep learning is also not infallible:

Ok, but how concerned do we ultimately need to be about that? Let's say you have a system where putting a unicorn sticker on a stop sign causes a lack of recognition of a stop sign, and the automated car blows through a stop sign and ends up in a crash that wrecks a car (everyone is ok though). Or let's say it blows through a stop sign only to come to an emergency stop in the intersection.

How many times does it misidentify that stop sign? 5? 2? 1? Certainly not 6000. Its ability grows with experience. Even if people can successfully hack it once, or even twice, the network learns from every event. Especially accidents. You're not going to be able to continue to beat it with the same trick. Once it has learned the trick, you never get to use that trick again.
 
Ok, but how concerned do we ultimately need to be about that? Let's say you have a system where putting a unicorn sticker on a stop sign causes a lack of recognition of a stop sign, and the automated car blows through a stop sign and ends up in a crash that wrecks a car (everyone is ok though). Or let's say it blows through a stop sign only to come to an emergency stop in the intersection.

How many times does it misidentify that stop sign? 5? 2? 1? Certainly not 6000. Its ability grows with experience. Even if people can successfully hack it once, or even twice, the network learns from every event. Especially accidents. You're not going to be able to continue to beat it with the same trick. Once it has learned the trick, you never get to use that trick again.

Does it? If a neural network mis-identifies a stop sign, is there a recursive mechanism that reviews and says "ah, yes, upon further review, that actually was a stop sign!" Or does it just ignore the mis-identified stop sign or categorize it as unknown? Reminds me of:

 
Does it? If a neural network mis-identifies a stop sign, is there a recursive mechanism that reviews and says "ah, yes, upon further review, that actually was a stop sign!" Or does it just ignore the mis-identified stop sign or categorize it as unknown?

Yea the system learns as it goes. A misidentification would be a heavily-weighted learning opportunity for the system.
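A sketch of what "heavily weighted" could mean in practice; everything here (the stand-in model, the dummy frames, the 50x weight) is invented for illustration, not how any particular carmaker actually does it:

```python
import torch
import torch.nn.functional as F

# Stand-in for a real detector: classifies a frame as "stop sign" vs "no stop sign".
model = torch.nn.Linear(64, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

frames = torch.randn(8, 64)                 # 8 dummy camera frames
labels = torch.randint(0, 2, (8,))          # corrected ground-truth labels
was_misidentified = torch.tensor([True, False, False, False,
                                  False, False, False, False])

# Frames the system got wrong count 50x more in the loss than routine frames,
# so one bad stop-sign call pulls harder on the weights than many uneventful ones.
per_frame_loss = F.cross_entropy(model(frames), labels, reduction="none")
weights = 1.0 + 49.0 * was_misidentified.float()
loss = (weights * per_frame_loss).mean()

optimizer.zero_grad()
loss.backward()
optimizer.step()
```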
 
Yea the system learns as it goes. A misidentification would be a heavily-weighted learning opportunity for the system.

I understand that, in theory. But how would it even know it was misidentified? I imagine it would see the altered stop sign and return a result of "not a stop sign" (I'm simplifying), but I don't see what mechanism would trigger it to know that it was actually a misidentified stop sign and not, oh I don't know, swamp gas or whatever.
 
I understand that, in theory. But how would it even know it was misidentified? I imagine it would see the altered stop sign and return a result of "not a stop sign" (I'm simplifying), but I don't see what mechanism would trigger it to know that it was actually a misidentified stop sign and not, oh I don't know, swamp gas or whatever.

Well it's going to know that something went wrong if it ends up in a collision in an intersection, or even just in an emergency stop setting. Early on, this might get flagged for a human operator to go through and correct the identification. You might think that sounds untenable, but how many of these situations are going to arise?

A mature system that had no human intervention could just approach the same intersection cautiously with the next car and search harder for the stop sign. It could even do a comparison between the footage of the accident (or emergency stop) against the next car to detect if something had been added or was missing in order to identify the piece that tricked it. Machine learning wouldn't necessarily be consciously looking for "what tricked me", but it would certainly see something like "ok now I know that this deviation in image data from what I normally think of as a stop sign is still a stop sign".

A fully mature system could use information from multiple cars at the same intersection and know that a different car had ID'd the stop sign from a different angle and not even get fooled a single time. Or just remember from the previous 16,000,000 cars that went through that there was a stop sign there.
 
Well it's going to know that something went wrong if it ends up in a collision in an intersection, or even just in an emergency stop setting. Early on, this might get flagged for a human operator to go through and correct the identification. You might think that sounds untenable, but how many of these situations are going to arise?

But if nothing goes wrong (also a reasonable outcome; an Uber driver I had recently just calmly blew a red light without noticing, and nothing happened), it will have blown the stop sign it didn't identify without having the capacity to understand its mistake, which is of no use to the neural net.

A mature system that had no human intervention could just approach the same intersection cautiously with the next car and search harder for the stop sign. It could even do a comparison between the footage of the accident (or emergency stop) against the next car to detect if something had been added or was missing in order to identify the piece that tricked it. Machine learning wouldn't necessarily be consciously looking for "what tricked me", but it would certainly see something like "ok now I know that this deviation in image data from what I normally think of as a stop sign is still a stop sign".

A fully mature system could use information from multiple cars at the same intersection and know that a different car had ID'd the stop sign from a different angle and not even get fooled a single time. Or just remember from the previous 16,000,000 cars that went through that there was a stop sign there.


At the risk of sounding contrarian, that all sounds like it would take a really long time. I know you are using exaggerated numbers (or at least I think you are) to demonstrate the capacity of neural nets to use very large data sets, but I think it also illustrates how potentially unwieldy acquiring that data could be. Yes, after 16 million trips through an intersection, you might have a really good idea of how to safely drive through it. But how long would that take? It's not like scraping images from the internet; it's a physical process. Then consider that there are 4 million miles of public roads in the US, and the take rate for vehicles equipped to even collect the data is very small at this point - compounded by the fact that the vehicles that do collect the data are electric and actively avoid large swathes of the US. Teslas, for instance, probably don't frequently travel on rural roads outside major metro areas. The data to train the neural net to become good at driving on those roads (e.g., poor signage, poor pavement quality, no striping) will remain limited until they do travel frequently on those roads...but there isn't a compelling reason for that to happen.
 
But if nothing goes wrong (also a reasonable outcome; an Uber driver I had recently just calmly blew a red light without noticing, and nothing happened), it will have blown the stop sign it didn't identify without having the capacity to understand its mistake, which is of no use to the neural net.

Unless the map that it's building has contrary information from multiple cars, in which case it could easily identify that there was a disconnect and flag an image for training the system even if nothing happened.
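Something like this, sketched in plain Python with made-up data structures (nothing here is from a real AV stack):

```python
# Compare what one car reported at an intersection against what the fleet map
# says should be there, and queue any disagreement for labeling/retraining,
# even if the drive itself was uneventful.
fleet_map = {("5th Ave", "Main St"): {"stop_sign", "crosswalk"}}   # built from prior cars

def check_observation(intersection, detected, frame_id, review_queue):
    expected = fleet_map.get(intersection, set())
    missed = expected - detected
    if missed:
        # This car drove through without seeing something many other cars have seen.
        review_queue.append({"intersection": intersection,
                             "missed": sorted(missed),
                             "frame": frame_id})

queue = []
check_observation(("5th Ave", "Main St"), {"crosswalk"}, "frame_0042", queue)
print(queue)   # -> the stop sign this car failed to detect, flagged for review
```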


At the risk of sounding contrarian, that all sounds like it would take a really long time. I know you are using exaggerated numbers (or at least I think you are) to demonstrate the capacity of neural nets to use very large data sets, but I think it also illustrates how potentially unwieldy acquiring that data could be. Yes, after 16 million trips through an intersection, you might have a really good idea of how to safely drive through it. But how long would that take? It's not like scraping images from the internet; it's a physical process. Then consider that there are 4 million miles of public roads in the US, and the take rate for vehicles equipped to even collect the data is very small at this point - compounded by the fact that the vehicles that do collect the data are electric and actively avoid large swathes of the US. Teslas, for instance, probably don't frequently travel on rural roads outside major metro areas. The data to train the neural net to become good at driving on those roads (e.g., poor signage, poor pavement quality, no striping) will remain limited until they do travel frequently on those roads...but there isn't a compelling reason for that to happen.

I think that's exactly the rollout we'd want though. For the tougher portions of the country to get tackled last, when the system already had eons of experience to draw from.
 
https://gizmodo.com/ups-has-been-delivering-cargo-in-self-driving-trucks-fo-1837272680

UPS has invested in TuSimple and announced that they've been using semi-autonomous trucks on a 115-mile route between Phoenix and Tucson since May.



Awesome of course. It does make me wonder how hard it would be to throw out a cardboard cutout of a railroad arm and have some flashing red lights in order to commit a robbery. Not that I'd do that.

The other day I was driving and needed to change lanes. There was a Model 3 that was a little too close in the other lane for me to get over. And I honestly thought to myself, "I could just get over; that car would brake for me." Of course I didn't. But I did think it.
 