One aspect required for autonomy is handling degradation, not only of the self driving system itself but of the whole car. It has to be able to deal with every car problem. In serious cases it needs to be able to get itself to as safe a location as possible and call Tesla for service.
This is absolutely true, and the reason why GM has made such a big deal about designing their Gen3 and above production AVs with redundant mechanical and control systems like commercial airliners have.
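As a rough illustration of the "deal with problems, get somewhere safe, call for service" idea above, here is a minimal fault-response sketch. It is purely my own toy example (the fault names and responses are invented), not how Tesla or GM actually structure this:

```python
# Minimal sketch of a fault-response ladder -- a toy illustration of the
# "handle every problem, get somewhere safe, call for service" idea, not
# how Tesla or GM actually implement it.
from enum import Enum, auto

class Fault(Enum):
    NONE = auto()
    SENSOR_DEGRADED = auto()     # e.g. a camera blinded by sun or grime
    CRITICAL_FAILURE = auto()    # e.g. a brake or steering fault

def respond(fault: Fault) -> str:
    if fault is Fault.NONE:
        return "continue normal driving"
    if fault is Fault.SENSOR_DEGRADED:
        return "slow down, widen safety margins, route toward service"
    # Anything critical: reach the safest spot possible and phone home.
    return "pull over at the nearest safe location and call Tesla service"
```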
 
Training effectiveness is not linear in the size of the training dataset. There are diminishing returns as more data is added. It's impossible to say what proportionality constant governs this asymptotic approach, but it may well be that billions of miles are no better than millions.
Let me take a stab at why it may be beneficial for Tesla. Clearly they are not sending all of the raw data from every sensor on each car, and they want to train their net with rich data. They said on the last call they are getting better at collecting and using the data. But in order to encounter a variety of situations, you need to be in lots of locations, at various times of day, in various road conditions, etc. It's kind of like Waze crowd-sourcing versus Google driving their cars around everywhere. Google was able to drive around everywhere, but if every car were a Google Street View car they would have done it a lot faster.
 
Training effectiveness is not linear in the size of the training dataset. There are diminishing returns as more data is added. It's impossible to say what proportionality constant governs this asymptotic approach, but it may well be that billions of miles are no better than millions.
A more intuitive way to think of this is...

How do you think your driving skills have progressed with miles driven - 1,000 vs. 10,000 vs. 100,000 vs. 1,000,000? Do you think you would be much better than you are now if you could actually drive a billion miles (you can't - it would take almost 4000 years averaging 30 mph, 24/7/365)? How many non-professional drivers have driven more than a million miles? How many new situations do you encounter after having driven a million miles? Why would a machine be any different?
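For what it's worth, the arithmetic behind that "almost 4000 years" figure is easy to check:

```python
# One billion miles at a constant 30 mph, driving 24/7/365.
hours = 1_000_000_000 / 30
years = hours / (24 * 365)
print(f"{years:,.0f} years")   # about 3,805 years
```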
 
A more intuitive way to think of this is...

How do you think your driving skills have progressed with miles driven - 1,000 vs. 10,000 vs. 100,000 vs. 1,000,000? Do you think you would be much better than you are now if you could actually drive a billion miles (you can't - it would take almost 4000 years averaging 30 mph, 24/7/365)? How many non-professional drivers have driven more than a million miles? How many new situations do you encounter after having driven a million miles? Why would a machine be any different?
I don't like that analogy because one of the main problems with humans is they don't pay attention. They get tired. They also drink and do drugs. They get old. They have a limited field of view. They don't practice for emergencies. A machine is different.

Despite driving for 28 years, there are many, many things I have never seen or had to deal with. I've been pretty lucky and had some close calls but only one accident while I was driving. The point here is to be better than a human. Do you think there will never be an AI driver that is better than you?
 
I don't like that analogy because one of the main problems with humans is they don't pay attention. They get tired. They also drink and do drugs. They get old. They have a limited field of view. They don't practice for emergencies. A machine is different.

Despite driving for 28 years, there are many, many things I have never seen or had to deal with. I've been pretty lucky and had some close calls but only one accident while I was driving. The point here is to be better than a human. Do you think there will never be an AI driver that is better than you?
Each and every comment above manages to miss the point entirely, which has nothing to do with whether a machine can or cannot be a better driver than a human, especially for reasons like getting tired or being impaired in some way or having limited sensory capabilities.

The point is that the pace of learning about a task - in this case the task of driving - naturally slows down once the training dataset is large, and a million miles is already a very large training dataset. Diminishing returns in training machine learning algorithms (of all kinds) is a well-known, inescapable fact.
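One toy way to picture those diminishing returns: error rates in many machine-learning systems fall off empirically as roughly a power law in the amount of training data. The constants below are made up purely for illustration, not a claim about Tesla's actual curve:

```python
# Toy illustration of diminishing returns from more training data.
def error_rate(miles, a=1.0, b=0.3):
    """Hypothetical error ~ a * miles^(-b); the constants are invented."""
    return a * miles ** (-b)

for miles in (1e6, 1e7, 1e8, 1e9):
    print(f"{miles:>14,.0f} miles -> relative error {error_rate(miles):.4f}")
# Each additional 10x of data buys a smaller absolute improvement than the last.
```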
 
This is absolutely true, and the reason why GM has made such a big deal about designing their Gen3 and above production AVs with redundant mechanical and control systems like commercial airliners have.
The problem is that the economics of a mass-market car are not the same as those of a commercial airliner. GM can ignore the extra cost because in the short term they are making relatively few test vehicles, and even after that they are not currently talking about sales to individuals, as I understand it. No matter how wonderfully safe a system might be, it will not save lives if you can't sell it, and you can't sell it if people can't afford it.

If the hardware costs doubled I doubt if Tesla could afford to install it in every car.

If the hardware failure rate is low enough, you may save a lot more lives, even with the failures, with a single system that people can afford than you would with an unaffordable redundant system.
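A purely hypothetical back-of-envelope comparison makes that trade-off concrete. Every number below is invented for the sake of argument (fleet sizes, failure rates, effectiveness), except the human fatality rate, which is roughly the right order of magnitude for US driving:

```python
# Hypothetical numbers only -- chosen to illustrate the trade-off, not sourced
# from any manufacturer. Suppose redundancy halves the failure rate but the
# added cost also halves how many cars people can afford to buy.
fatal_crashes_per_mile_human = 1 / 80_000_000   # roughly the US order of magnitude
miles_per_car_per_year = 12_000

def net_lives_saved(cars_sold, failure_penalty):
    """Crashes avoided vs. human drivers, minus crashes caused by hardware failures."""
    miles = cars_sold * miles_per_car_per_year
    avoided = miles * fatal_crashes_per_mile_human * 0.9   # assume 90% reduction when working
    caused = miles * failure_penalty                        # extra crashes from hardware faults
    return avoided - caused

single = net_lives_saved(cars_sold=1_000_000, failure_penalty=1 / 1_000_000_000)
redundant = net_lives_saved(cars_sold=500_000, failure_penalty=1 / 2_000_000_000)
print(f"affordable single system:  ~{single:.0f} net lives saved per year")
print(f"pricier redundant system:  ~{redundant:.0f} net lives saved per year")
# With these made-up numbers the affordable single system comes out ahead.
```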

This is the sort of tough decision that companies and regulators are going to have to face.
 
Each and every comment above manages to miss the point entirely, which has nothing to do with whether a machine can or cannot be a better driver than a human, especially for reasons like getting tired or being impaired in some way or having limited sensory capabilities.

The point is that the pace of learning about a task - in this case the task of driving - naturally slows down once the training dataset is large, and a million miles is already a very large training dataset. Diminishing returns in training machine learning algorithms (of all kinds) is a well-known, inescapable fact.
I was pointing out why I didn't like the analogy.

So do you think Tesla will stop collecting data for their machine learning because of diminishing returns (they are paying for it), and when do you think that will be?
 
If the hardware costs doubled I doubt if Tesla could afford to install it in every car.
In the short term GM and Tesla are working on two different problems. GM is working on a geofenced taxi service. Tesla is working on worldwide automated driving. Tesla may be able to use the car owner as a crutch to deliver a solution that has more issues but works everywhere. The Tesla Show guys had a good discussion on this. I agree it will be hard for regulators and companies to decide when it's ready.
 
In the short term GM and Tesla are working on two different problems. GM is working on a geofenced taxi service. Tesla is working on worldwide automated driving. Tesla may be able to use the car owner as a crutch to deliver a solution that has more issues but works everywhere. The Tesla Show guys had a good discussion on this. I agree it will be hard for regulators and companies to decide when it's ready.
Ultimately, it may be insurance companies that dictate when this technology is ready for prime time. Without insurance... no cars.

Dan
 
Ultimately, it may be insurance companies that dictate when this technology is ready for prime time. Without insurance... no cars.

Dan
Reminds me of when Leo Laporte said that when he talked to his insurance agent about insuring his Model X, the agent asked him if he was going to use EAP, because they either wanted to charge more for that or maybe weren't going to insure him at all.

Also makes me wonder how it will work if Tesla sends out the FSD firmware. The insurance company won't know if you have it or are using it until they ask. Unless they put something in the policy before the firmware gets pushed out, there may be a situation where it's OK until your next renewal. Is there anything in the policy now that prohibits using FSD?
 
Reminds me of when Leo Laporte said that when he talked to his insurance agent about insuring his Model X, the agent asked him if he was going to use EAP, because they either wanted to charge more for that or maybe weren't going to insure him at all.

Also makes me wonder how it will work if Tesla sends out the FSD firmware. The insurance company won't know if you have it or are using it until they ask. Unless they put something in the policy before the firmware gets pushed out, there may be a situation where it's OK until your next renewal. Is there anything in the policy now that prohibits using FSD?
I would think that eventually, insurance companies will love EAP and FSD. When they are many times safer than human drivers and proven to be reliable in avoiding accidents, insurers will see that this technology is going to save them millions. But... in the meantime it could be slow going getting the bean counters at State Farm, Allstate and all the rest to accept the new technology.

Dan
 
I think that this YouTube video by The Nerdy Engineer is quite good.

One small problem... using human drivers as the sole reference for machine learning algorithms will lead to cars that drive as badly as people. I'm sure we don't want cars that decide not to use their turn signals sometimes because in similar circumstances many people don't.
 
One small problem... using human drivers as the sole reference for machine learning algorithms will lead to cars that drive as badly as people. I'm sure we don't want cars that decide not to use their turn signals sometimes because in similar circumstances many people don't.
I agree people are pretty bad drivers, even though everyone thinks they are a good driver. But if we follow this reasoning, doesn't it seem plausible that you could get better than humans by removing at least some of the bad behavior from the data set?
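If Tesla (or anyone) wanted to do that, one plausible place is data curation before training. This is only a hypothetical sketch with invented field names, not anything we know about Tesla's pipeline:

```python
# Hypothetical data-curation filter: keep only logged clips where the human
# driver behaved the way we actually want the car to behave. Field names are
# invented for illustration; this is not Tesla's data pipeline.
def is_good_example(clip):
    if clip["lane_change"] and not clip["signaled"]:
        return False                          # changed lanes without signaling
    if clip["min_following_distance_s"] < 1.0:
        return False                          # tailgating
    if clip["speed_over_limit_mph"] > 10:
        return False                          # significant speeding
    return True

def curate(clips):
    return [c for c in clips if is_good_example(c)]
```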

I don't know how literally to take Elon's comments, but he said in the last investor meeting that he thought it was going to be like AlphaGo. In that case it was first trained on human games to the point where it could beat the best in the world. But then later they built a new version that trained itself, and the latter was able to easily beat its previous incarnation as well as the best human players. So if we follow that logic, it doesn't seem like we ultimately want humans training it, but it was a successful first step.
 
One small problem... using human drivers as the sole reference for machine learning algorithms will lead to cars that drive as badly as people.
I don't believe that most companies are using machine learning to teach a car "how to drive". That behavior is directly being programmed. Machine learning is being used to teach a car how to "recognize situations". Is that thing on the highway a paper bag, or a boulder? Should I just hit it, or avoid it at all costs?
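A minimal sketch of that division of labor might look like the following: a generic learned classifier answers "what is it?", and a hand-written rule decides what to do about it. This is purely illustrative (an untrained, generic PyTorch model with invented class labels), not Tesla's or anyone's actual architecture:

```python
# Illustrative only: learned perception feeds a hand-coded rule.
import torch
import torch.nn as nn

CLASSES = ["paper_bag", "boulder", "tire_debris", "plastic_bag"]  # hypothetical label set

perception_net = nn.Sequential(           # untrained stand-in for a camera-image classifier
    nn.Conv2d(3, 16, kernel_size=3, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, len(CLASSES)),
)

def plan(image: torch.Tensor) -> str:
    """Machine learning recognizes the situation; the response is programmed."""
    probs = perception_net(image.unsqueeze(0)).softmax(dim=-1)[0]
    label = CLASSES[int(probs.argmax())]
    # Hard-coded driving policy keyed off the recognized situation:
    return "avoid at all costs" if label == "boulder" else "safe to drive over"
```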
 
"Hi Human, please welcome our new hire: Machine. Machine will be helping you out around here, so if you could be a sport and train Machine on everything you do, that'd be just great."
 
I don't believe that most companies are using machine learning to teach a car "how to drive". That behavior is directly being programmed. Machine learning is being used to teach a car how to "recognize situations". Is that thing on the highway a paper bag, or a boulder? Should I just hit it, or avoid it at all costs?
Yes, I think we agree, semantics aside. "How to drive" is about decision making - that's the heart of what any company (including Tesla) is using machine learning for: "this input defines a situation that requires that response." Any AI, whether based on supervised learning, self-play, or a mix, can only go so far without putting the machine in the driver's seat, so that there is feedback between its decisions and the decisions of other drivers. Driving is not like a game with hard rules, and you can't have a computer "drive against itself" in a realistic way. The idea that this is like AlphaGo or chess assumes you can have the AI drive in virtual space against fully realistic simulated drivers, and there are real problems with that on multiple levels. But... I've said all this before in posts that are getting moldy by now. Time to move on and simply wait to see what the future brings from Tesla.
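To make the feedback point concrete, here is a toy log-replay sketch of my own (all numbers and the "policy" are invented). The other car's motion comes from a recorded drive, so once the AI's speed differs from what the human actually did, the replayed scene stops being a faithful stand-in for reality:

```python
# Toy log-replay sketch (my own invention -- not any company's simulator).
# Gaps/speeds come from a pretend recorded drive with a human at the wheel.
human_speeds = [30, 30, 28, 25, 25, 27, 30]   # m/s, what the human driver did
lead_gaps    = [20, 20, 19, 18, 18, 19, 20]   # m, gap to the lead car in the log

def ai_speed(gap):
    # hypothetical learned policy: pushes a little harder than the human did
    return 33 if gap > 10 else 20

gap = lead_gaps[0]
for t in range(1, len(lead_gaps)):
    # Reconstruct how far the lead car moved each second, straight from the log.
    lead_advance = lead_gaps[t] - lead_gaps[t - 1] + human_speeds[t - 1]
    gap += lead_advance - ai_speed(gap)        # but the ego car now drives itself
    print(f"t={t}s: replayed gap {gap:.0f} m")
# The gap falls well below anything the log ever saw. A real lead driver would
# react to being tailgated; the recording can't tell us how, so the replay
# stops being realistic a few seconds after the first deviation.
```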
 
The fundamental problem with development using only simulation (setting aside machine vision/perception issues) is that the system (traffic) is extremely non-linear. In real life, as soon as the machine decides what to do and takes some action, that action has an effect on the perceptions and actions of others - and so on. So, without putting the decision making of the car into action with possible errors, precision, latency, etc., the scene changes in a way that cannot be simulated by the time you get a few seconds beyond that initial decision. It's like trying to figure out what the board will look like many moves ahead in a chess game while watching someone else play your side. That kind of learning - watching someone else play while you think about what you would do - only gets you so far.
Isn't the point of Tesla collecting reams of driving data to provide realistic feedback for their AI? For instance, Tesla probably has a significant amount of driving data where the Tesla is in the right lane and the car in front of it jumps on the brakes. As long as they have a sufficiently large library of different scenarios, like...
  • The Tesla brakes and waits for the car to leave the lane.
  • The Tesla is unable to brake in time to avoid hitting the car, but can change lanes to avoid hitting the car.
  • The Tesla is unable to brake in time to avoid hitting the car and can't change lanes because another car is there.
  • Plus however many other scenarios...
Why wouldn't they be able to stitch together a simulation for their AI that's both dynamic and realistic, and maybe even novel to some degree? For that matter, why wouldn't they be able to train an AI to drive unsafely (speeding, tail-gating, etc...) and also use that as a distinct agent in the simulations?
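For what it's worth, the kind of scenario library described above could be sketched like this. The structure, names, and numbers are invented for illustration; Tesla has not described their setup in this detail:

```python
# Hypothetical sketch of a scenario library -- invented structure and numbers.
# Each logged event becomes a template that can be replayed against different
# surrounding-driver behaviors, including a deliberately "unsafe" agent.
from dataclasses import dataclass
from typing import List
import random

@dataclass
class Scenario:
    description: str
    initial_gap_m: float
    lead_brake_decel_mps2: float      # how hard the lead car brakes

def cautious_driver(gap_m):           # placeholder behavior model
    return "brake" if gap_m < 30 else "hold speed"

def tailgater(gap_m):                 # the "trained to drive unsafely" agent
    return "brake" if gap_m < 8 else "accelerate"

library: List[Scenario] = [
    Scenario("lead car brakes; ego can stop in time", 40.0, 4.0),
    Scenario("lead car brakes; ego must change lanes", 15.0, 8.0),
    Scenario("lead car brakes; adjacent lane occupied", 15.0, 8.0),
]

def make_training_episode():
    """Stitch a logged scenario together with a randomly chosen traffic agent."""
    return random.choice(library), random.choice([cautious_driver, tailgater])
```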
 
Isn't the point of Tesla collecting reams of driving data to provide realistic feedback for their AI? For instance, Tesla probably has a significant amount of driving data where the Tesla is in the right lane and the car in front of it jumps on the brakes. As long as they have a sufficiently large library of different scenarios, like...
  • The Tesla brakes and waits for the car to leave the lane.
  • The Tesla is unable to brake in time to avoid hitting the car, but can change lanes to avoid hitting the car.
  • The Tesla is unable to brake in time to avoid hitting the car and can't change lanes because another car is there.
  • Plus however many other scenarios...
Why wouldn't they be able to stitch together a simulation for their AI that's both dynamic and realistic, and maybe even novel to some degree? For that matter, why wouldn't they be able to train an AI to drive unsafely (speeding, tail-gating, etc...) and also use that as a distinct agent in the simulations?
In short, I don't think this approach works for a task without hard rules (if you take the rules of the road literally, drivers violate them almost all the time in completely unpredictable ways). Trying to simulate that behavior is dual to, and as big a problem as, developing the AI in the first place. Let's wait and see...
 
In short, I don't think this approach works for a task without hard rules (if you take the rules of the road literally, drivers violate them almost all the time in completely unpredictable ways). Trying to simulate that behavior is dual to, and as big a problem as, developing the AI in the first place. Let's wait and see...
I guess I don't follow. If Tesla has several billion miles of example data with human drivers behaving in completely unpredictable ways, why wouldn't they use that when training their AI?
 