From the NHTSA release [1]:
> ODI has identified six Standing General Order ("SGO") reports in which a Tesla vehicle, operating with FSD engaged, approached an intersection with a red traffic signal, continued to travel into the intersection against the red light and was subsequently involved in a crash with other motor vehicles in the intersection. Of these incidents, four crashes resulted in one or more reported injuries. At least some of the incidents appeared to involve FSD proceeding into the intersection after coming to a complete stop.
I've experienced this bug on every FSD 13.x version, including the current 13.2.9. When you're the first to pull up to a red light, the car stops and waits, then after a while it sometimes (maybe 1 time in 100 or so) just decides to go even if the light is still red. Horrifying because sitting at a red light doesn't seem like a dangerous situation, but in fact it might be the most dangerous place on FSD right now. Hopefully this forces them to fix it because my colorful language on the voice feedback apparently hasn't convinced them.
[1] https://www.nhtsa.gov/?nhtsaId=PE25012
To me, as someone who had a Tesla for 4 years and used Autopilot, this remains one of the fundamental issues of the Tesla self-driving approach.
The errors it makes are inhuman and make no sense to us, which makes them more insidious and unpredictable.
I could understand running a light trying to beat it; humans do that. Defensive drivers around you can anticipate this and look for crossing traffic when their light goes green instead of flooring it immediately.
But a full stop then deciding to make a break for it is bananas.
Not the first baffling inhuman error it has made either.
Yes! This has happened to me once as well. The car slowed down to a rolling stop because it saw a yellow light... and then just as the light turned red, the car took off and broke the red light. WTF! And just as you said, I had mentally switched off in anticipation of a long wait, so I was in absolutely no state to react.
Fortunately there was no accident and there were no cops around or apparently traffic cameras. Somehow I think "Sorry, the car broke the red light, not me!" would not have been a compelling thing to say to a cop or a judge.
The only thing I could do was hit the dashcam record button as some sort of proof, but the video itself has no indication FSD was engaged. I suspect I would have to subpoena or forensically extract any data that could exonerate me, which is just not practical if the worst I got was a ticket.
Well, how are they going to "fix it" when their system is an "end to end neural net"? It's a black box. There's no code to update to tell it to behave differently during red lights.
You can increase the training penalty for running a red, or increase the relative share of red-light stops in the real and synthetic training data. You can use neural-net analysis techniques to confirm that features like color and stoplights are being recognized.
You confirm it is fixed by ensuring your validation set also has sufficient data representing the failures (and that they now succeed).
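For what it's worth, here is a minimal sketch of the two levers described above (oversampling the rare scenario and upweighting its loss term), plus a dedicated validation slice. It is a toy illustration in plain Python, not Tesla's actual pipeline; the scenario labels, sampling weights, and penalty value are all assumptions chosen for illustration.

    # Toy sketch in plain Python (not Tesla's pipeline): scenario labels,
    # weights, and the penalty value below are illustrative assumptions.
    import random
    from collections import Counter

    random.seed(0)

    # Hypothetical training set: mostly ordinary driving, few red-light stops.
    train = [{"scenario": "red_light_stop"} if i % 100 == 0 else {"scenario": "other"}
             for i in range(10_000)]

    # Lever 1: oversample the rare scenario so the net sees it far more often.
    weights = [50.0 if c["scenario"] == "red_light_stop" else 1.0 for c in train]
    batch = random.choices(train, weights=weights, k=1_000)
    print(Counter(c["scenario"] for c in batch))  # red-light clips now ~1/3 of a batch

    # Lever 2: upweight the loss whenever the planner says "go" on a red light.
    def weighted_loss(base_loss, predicted_go, light_is_red, penalty=25.0):
        return base_loss * (penalty if (predicted_go and light_is_red) else 1.0)

    print(weighted_loss(1.0, predicted_go=True, light_is_red=True))  # 25.0

    # Validation: keep a slice made of the known failure cases and track it
    # separately, so a red-light regression can't hide inside the overall average.
    val_red_light = [c for c in train if c["scenario"] == "red_light_stop"]
    print(f"red-light validation clips: {len(val_red_light)}")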
I don't think anyone seriously believes it's an end-to-end neural network. The person who said that has somewhat of a credibility problem.
I am someone and I believe it's an end-to-end neural net.
It's not a total black box. Even as the driver you have the speed profiles, follow distance, and speed limit settings. I wouldn't be surprised if they still have deterministic code for how to handle traffic lights.
It's terrifying to me that Tesla and you can beta-test this potentially fatal software on the rest of us.
It should be illegal by default until the OEM proves it's safe with paid safety drivers.
Legal by default is an insane state of affairs.
Elon has already switched the narrative to
"Optimus robot is the future of Tesla"
He knows shareholders value a company far more when it's their dreams guiding the valuation rather than what exists in reality.
I think it's very possible that we won't get self-driving cars because, similar to nuclear energy, we'll decide that the risks aren't worth it.
But it'll be based on risks introduced by preventable human error: hubris, etc.
All it will take is some viral video of a Tesla running over a child or something terrible like that.
What I fear is that self driving will be statistically significantly better than human drivers, but because it isn't perfect we won't allow it anyway.
I think the core problem there is liability. I'm as good or as bad of a driver as I can be, but no matter what I'm responsible for the driving. If I get into one accident a year that's one accident a year.
A self-driving car might be 5x better than me at driving, but logically I can't be liable for what it does. The company making it has to be. 5x better would be 0.2 accidents a year. But multiply that by the 100,000 cars the manufacturer has sold... they don't want that liability. That's why Tesla's Autopilot is still supervised: they want its mistakes to be your problem.
It presents a lot of thorny problems. If I am a persistently dangerous driver I can have my license taken away and be taken off the road. But if a self driving car is judged to be too dangerous for the road you'll suddenly have thousands of people who lose access to their car (assuming a future with self-driving only cars) through no fault of their own. What's their path to getting back on the road?
Your liability is covered by your insurance company. And it costs you on the order of $1000 a year for that privilege.
If the self-driving car company takes on that liability it'll save you the $1000/year. So assume they're either going to charge you an extra $10K up front or an extra $1000/year. For that kind of cash they should be quite willing to take on the risk or they can find an insurance company to do so, if their car is actually safer than an average driver.
This should work in most countries. Perhaps not the US with its pattern of massive punitive damage awards.
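As a rough back-of-the-envelope, using only figures already quoted in this thread (the ~$1000/year liability cost above, plus the "5x better" and 100,000-car numbers from the earlier comment), the economics of the manufacturer taking on liability look something like the sketch below; tail risk and litigation costs are ignored.

    # Back-of-the-envelope only; all inputs are figures quoted in the thread above.
    avg_liability_cost_per_year = 1_000   # dollars per average driver, per the comment above
    safety_factor = 5                     # "5x better", from the earlier comment
    fleet_size = 100_000                  # cars sold, from the earlier comment
    charge_per_car = 1_000                # what the maker could charge instead of your insurer

    expected_payout_per_car = avg_liability_cost_per_year / safety_factor  # ~$200/year
    margin_per_car = charge_per_car - expected_payout_per_car              # ~$800/year
    print(f"expected payout per car: ${expected_payout_per_car:,.0f}/year")
    print(f"fleet-wide margin:       ${margin_per_car * fleet_size:,.0f}/year")  # ~$80,000,000
    # The catch is tail risk (punitive damages, as noted above), which is why an
    # insurer may still need to sit in the middle.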
Right, but the companies still aren't going to if they don't have to. Otherwise Tesla would be doing it today.
OP said:
> self driving will be statistically significantly better than human drivers, but because it isn't perfect we won't allow it anyway.
My contention is that it's not that everyone is a Luddite; it's that as long as companies are legally allowed to provide quasi-self-driving that they bear no liability for, they will do exactly that. And that is what will hold us back.
1. Tesla has been found liable in one case, there will be many more. https://www.nbcnews.com/news/us-news/tesla-autopilot-crash-t...
2. There's $1000/year of potential revenue they're missing out on by not assuming liability. That's a pretty powerful incentive.
Didn't Mercedes-Benz begin assuming liability a couple years ago for accidents that happen while Level 3 self-driving is engaged?
I wonder if that's still the case, and if so how many accidents they've become liable for.
It can't just be better than the average human driver. It has to be something like 10x as good as the average human driver, or on par with a race driver.
Everyone thinks they're above average, even people who know statistics! So if it's merely 20% better than the average driver a huge number of people will conclude "I am above average so I'll do a better job"
Will some of them be wrong? Of course. But tons of them will be right, too.
It can't be statistically significantly better, it has to be statistically overwhelmingly better. Not a part of a standard deviation but several of them.
A better way to phrase it is it needs to be better than the average Uber driver. People have a less inflated sense of that standard.
Those statistics need to be figured out. You are on the right track: there are a few really bad drivers (and I'm not sure race drivers are better on common roads; anyone have data?). We need to work through those issues. I didn't say average, though; I said better than humans, and I left the measurements open because I don't know all the issues to account for.
Correct. The averages include teenage males, the elderly, people texting while driving, road ragers, people who drive late and tired, and DUI enjoyers. Accident rates are extremely fat-tailed.
If you aren’t in one of those categories you are immediately dramatically better than average. This is fairly easy to do before even considering being a “good driver” / defensive driver / etc.
So you want a better comparison for most people.
At the moment it seems much more likely it will be significantly worse, statistically speaking, but because of massive lobbying it will be allowed anyway.
It is statistically safer for cars using LIDAR, like Waymo. Tesla’s system is sleeker, but involves far more human risk.
We don’t know that. All of the players, including Waymo give out only partial data that makes a sound analysis impossible.
Lidar has problems too. It works quite well as long as only a limited number of cars are using it. Of course, if all of those were Waymos you could synchronize the sensors to not interfere with each other.
Of course, if that fails, perhaps they could use honking as a backup sensor.
Lidar also destroys camera sensors. That might become a huge problem in the future.
Optical filters have been around for a while, including anti-laser filters for cameras, though those have only existed for a few years.
This is a major flaw in decision making that some people have:
A new solution that has fewer problems is worse than an existing solution with more problems.
There's also a willingness to be less upset with humans making a mistake than a machine.
Edit:
Unknown problems may or may not exist, so while I think that concern makes sense, it doesn't matter until they come up.
I'm making the decision based on the current state. If additional issues come up, then you re-evaluate whether the new solution is better or worse than the existing one.
If you consider unknown problems then how can you make a decision?
X + Y > Z?
Where X is the weight of problems for the new solution, Z is the weight for the existing solution, and Y is a value between 0 and infinity (unknown problems).
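Stated slightly more formally (this formalization is mine, not the commenter's exact wording), the rule being described amounts to:

    \[
      \text{prefer the new solution} \iff X + Y < Z,
      \qquad Y \in [0, \infty) \text{ unknown.}
    \]

Since Y is unbounded, the comparison is undecidable in advance; the commenter's rule evaluates it at Y = 0 and re-evaluates whenever a concrete new problem raises Y.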
We don't know that it has fewer problems.
The distribution of errors is as important as the error rate.
Simply being sober, awake, calm, and not texting puts you far above the average driver, and those are things you can control.
A self-driving car which, apparently 10+ years into development, is currently running red lights randomly 1% of the time is not in your control.
>A self-driving car which, apparently 10+ years into development, is currently running red lights randomly 1% of the time is not in your control.
It can be controlled however.
>Simply being sober, awake, calm, and not texting puts you far above the average driver, and those are things you can control.
It's not important whether it's possible; what matters is whether it happens.
It's conditional probabilities.
I am not elderly, do not DUI, drive tired, or touch my phone while driving. I put myself in a different risk pool by my behavior.
If I step into a Tesla FSD car, I am in the same risk pool as everyone else in one when it decides to run 1% of red lights, or whatever other stupid bug is in the next release.
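To make the risk-pool point concrete, here is a tiny sketch with purely hypothetical numbers (the 30% impaired/distracted share and the 3x multiplier are illustrative, not measured rates): conditioning on your own behavior pulls your rate below the population average, while a systematic FSD defect applies the same rate to everyone who rides.

    # Hypothetical, illustrative numbers only; this is just the conditional-probability
    # argument above made explicit, not real crash statistics.
    population_rate = 1.0        # normalized baseline: crashes per unit of exposure
    impaired_share = 0.30        # assumed share of exposure from impaired/distracted/etc. drivers
    impaired_multiplier = 3.0    # assumed risk multiplier for that group

    # Mixture: baseline = share * (mult * r) + (1 - share) * r; solve for the careful rate r.
    careful_rate = population_rate / (impaired_share * impaired_multiplier + (1 - impaired_share))
    print(f"careful-driver rate: {careful_rate:.2f}x the population average")  # ~0.63x

    # A fleet-wide FSD defect gives one rate to every rider, regardless of their habits.
    fsd_rate = 0.8               # assumed fleet rate relative to the population average
    print(f"FSD beats the average ({fsd_rate} < 1.0) but not the careful driver "
          f"({fsd_rate:.2f} > {careful_rate:.2f})")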
> A new solution that has fewer problems is worse than an existing solution with more problems
Don't disagree, but new solutions can come with unknowns
It's not an issue of having more or having less problems. The issue is having unknown, unfamiliar problems.
> There's also a willingness to be less upset with humans making a mistake than a machine.
There's willingness to be upset at anyone with deep pockets who can be found accountable. And the motivations for that aren't emotional, they are purely material.
There's a reason why people have spent decades trying to find a pharmacological cause for autism, in spite of the enormous amount of evidence that the condition is mostly hereditary.
And a very good reason why vaccines in America are exempt from the legal system.
> similar to nuclear energy, we'll decide that the risks aren't worth it.
Except, in both cases the risk, statistically, is clearly worth it.
It is the optics that suck.
But humans are easily influenced by perception and narrative, rather than rationality.
You're talking about the risk of accidents, what about long term storage of spent fuel?
There's still no final storage in all of the US, so there's that.
That spent fuel is viable fuel for a different type of reactor. If I'm not mistaken, those reactors are forbidden in the US. They could be used elsewhere though.
Reprocessing or storing fuel that is 100 or, better yet, 1,000 years old is way more cost-effective, so it may be a net benefit to keep it above ground to decay.
> You're talking about the risk of accidents, what about long term storage of spent fuel?
Thinking total risk, end-to-end, including reduction of risks associated with other technologies.
Risk doesn't exist in a vacuum. Current levels of risk mitigation make nuclear uncompetitive without large subsidies. Arguing to make nuclear less safe is difficult.
Self-driving has a similar issue, where the value shrinks the more supervision it requires. Tesla's is a net benefit in terms of effort, but it can't operate safely while the driver is asleep.
> Current levels of risk mitigation
I think that's by broad policy and not by individual risk mitigation. Isn't it something like "if nuclear is cheaper than the average then it has to spend the difference on risk mitigation"?
Not really what I'm talking about. There's quite a lot spent to avoid known failures, and little way to know the minimum they could get away with.
Three Mile Island wasn't a public health hazard, but lack of maintenance cost billions by destroying the reactor, prompting the industry to spend significantly more money on maintaining reactors. The problem is it's really difficult to determine what's overkill here.
There are something like 600,000 US bridges, and sometimes people look at a failure and say it's rare enough not to be worth doing anything about.
Something like the trolley problem is at work here, but you're the one tied up on the tracks.
Suppose the accident rate for regular cars were 1 fatality every 100 million miles driven (it actually is in the US).
Suppose further a hypothetical self-driving car has a proven rate of 1 fatality every 1 billion miles (10x better). Except when that fatality happens, it is because the car suddenly incinerates when arriving otherwise safely at its destination. Something about the advanced AI technology makes this outcome completely random and completely unfixable.
Which do you choose? Drive yourself, 10x more dangerous? Or leave it entirely up to chance, but 10x safer?
The rational choice is to pick the self-driving car. Yet I suspect many people (including me, I admit) would choose to drive themselves.
How far apart do those numbers need to be before most people give up the steering wheel?
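A quick worked version of the numbers in the thought experiment above. The fatality rates are the ones stated (1 per 100 million miles vs 1 per 1 billion); the ~13,000 miles/year and 50-year horizon are my assumptions, roughly typical US figures, just to put the gap in absolute terms.

    # Uses the rates from the thought experiment above; mileage and horizon are assumptions.
    miles_per_year = 13_000
    years = 50
    lifetime_miles = miles_per_year * years

    rates = {"drive yourself": 1 / 100e6, "self-driving": 1 / 1e9}  # fatalities per mile
    for label, rate in rates.items():
        p = 1 - (1 - rate) ** lifetime_miles   # ~= rate * miles for rates this small
        print(f"{label:>14}: ~{p:.3%} lifetime chance of a fatal crash")
    # Roughly 0.65% vs 0.065%: a real gap, but small enough in absolute terms that
    # the manner of the rare failure can dominate how people feel about the choice.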
An example of this effect can already be found in motorcycles. I currently own a BMW motorcycle and a Honda truck. The Honda has all the modern driver aids: radar-based automatic braking, lane keeping, etc. It has many airbags and is statistically about 30x safer per mile than the motorcycle. The truck is far easier to drive. I still ride the motorcycle whenever I can. Why? Because the motorcycle forces me to become more fully human, and the truck turns me into more of a machine. On a motorcycle you smell the hay as you pass a field. You feel the cool air as you ride over the stream. Every tiny bump and crack in the pavement has an effect, and you feel them all. You are not in a car, you are in the world. You must PAY FULL ATTENTION to the here and now or you will get squished. A motorcycle forces you to BE HERE NOW.
Our mental suffering is not because the car is on autopilot. Suffering happens because WE ARE ON AUTOPILOT. So I chose to trade the 30x risk of death for a 30x reduction in mental suffering. Rational? God I hope not.
It’s especially unfortunate because these cars are not even “full self driving.” It’s just a lie that Musk has gotten away with because there are so many Tesla stakeholders now
Nuclear is probably a difficult comparison, but in both cases it would indeed be about liability. Would the owner have liability? That would increase insurance by a lot, probably making Autopilot significantly more expensive to have.
Is the manufacturer liable? Autopilot would be too much risk, and the manufacturer would demand that users only activate it while seated behind the wheel, with both hands on the wheel, while getting a coffee infusion. The tool would lose its advantages.
Power plants aren't insurable because a leak would financially destroy any company, or operating costs would become so high that nuclear couldn't compete anymore.
Maybe we will get it one day. Waymo probably did it correctly: a limited road network, a careful approach, learn what the problems are, and expand from there.
The manufacturers will take liability. Mercedes-Benz is already doing this with their Drive Pilot level 3 autonomous vehicles. Coverage is limited but will expand.
https://www.mbusa.com/en/owners/manuals/drive-pilot
Yeah, MB is really going out on a limb here...
>DRIVE PILOT can be activated in heavy traffic jams at a speed of 40 MPH or less on a pre-defined freeway network approved by Mercedes-Benz. DRIVE PILOT operates in daytime lighting conditions when inclement weather is not present and in areas where there is not a construction zone. Please refer to the Operator’s Manual for a full list of conditions required for DRIVE PILOT.
Would you trust Tesla’s management if they said they assumed liability?
Either there is a legal agreement in place or there isn't: trust doesn't enter into it.
Trust does matter. If Tesla is determined to use every possible trick to avoid liability, it will cost you a lot to get the benefit of that agreement.
Meanwhile, if you have a contract with a more serious company, you won't have to spend years and thousands of dollars fighting them over liability.
Since humans can't really sit in the seat and reliably take over in an emergency, I think you have to go straight to Level 5, no-steering-wheel models.
No one is going to regulate it this presidential term though so Tesla has some more time to work I guess.
I think we'll still get self driving cars. Waymo doesn't have this problem.
Even just long, easily drivable roads like highways or German autobahns would bring a huge benefit and would be simple to automate.
You would only need local people to pick up the truck at a nearby parking spot and drive it to the target location.
That alone would help long-haul truckers see their families and not have to sleep in their trucks. It would save costs and make it safer for everyone if all the trucks drove automatically.
BMW and other EV makers can already drive hands-free on much of the German autobahn.
What I also don't understand: if I really want the benefit of a self-driving car, I only need it when I'm driving long distances or when I'm intoxicated. Honestly, just let me record the road from the bar to my home, let me drive it a few times until my car knows that route, and done.
> I think it's very possible that we won't get self-driving cars because...
We already have self-driving cars: look at Waymo, etc., and look at Chinese ride-hailing companies. What we won't have is private-use self-driving cars: a regular person will not be able to buy one.
Of course we will have private-use self-driving cars. Auto manufacturers will get that technology one way or another, either by developing it themselves or licensing it from others. If there's consumer demand then they'll sell it: Mercedes-Benz is already selling level 3 autonomous cars to consumers. Most regular people prefer to own (or at least lease) their own private cars so that they can go wherever they want whenever they want and keep some of their stuff inside.
In which case, it would be largely uninteresting for many of us.
I rarely take an Uber or a taxi (probably single digit number of times a year) and, even if it were half the price, that would be unlikely to change my behavior much.
You are thinking from a Western mindset: "my world is the center of the world."
That can change consumer behavior around you dramatically, for example by cutting car ownership.
What I mean is that they'll be banned because people are dying / some viral incident causes public sentiment to turn against the technology.
But waymo does not operate nearly at the same degree as what Tesla FSD aspires to (anywhere, anytime).
While a good amount of functionality exists, the liability model and accidents are big road blocks to seeing this technology truly mainstream, not just select cities/routes/etc
> But waymo does not operate nearly at the same degree as what Tesla FSD aspires to (anywhere, anytime).
I aspire to be a trillionaire. Does that count for anything?
> While a good amount of functionality exists, the liability model and accidents are big road blocks to seeing this technology truly mainstream, not just select cities/routes/etc
Waymo just started service at SFO airport last month.
What’s your definition of mainstream? Everywhere anytime like an Uber?
Except, we have self-driving cars. Waymos are moving people around in San Francisco all day every day, and they are all over the place.
Personal self-driving cars? Maybe less so because we probably want them to be well maintained.
Tesla-style, camera-only, dual-use (human and computer driven), safety-as-an-afterthought cars? Probably not.
Let me guess...the stock went up?
Generously putting the broader "full self driving" discussion to one side....
Moving from mixed sensor hardware to camera-only is only ever likely to result in more articles like the one linked here being written.
No amount of AI bullshit is going to save you from the brick wall that the camera can't see because there is fog etc. in the way.
One thing that concerns me with Tesla FSD (and I use it every day in my Plaid) is the transition between FSD on and off. Sometimes I catch myself forgetting to steer or slow down because I just switched it off to take an exit early, etc., and my mind hasn't switched modes along with it.
Typical US lagging system: most of the reports are 2-3 generations back on hardware and software. Sure, you can open an investigation, but isn't that too late?
P.S. I've long been a sceptic of Tesla's FSD, but the latest changes really show huge progress; compared with even a year ago, these are two different worlds.
By progress, do you mean things like they can now disable FSD even faster before an accident to avoid liability? Maybe they are better at obscuring the crash data now? Or do they just blackhole it?
I'd like to believe that this is done in the interest of driver safety. However unlike past administrations, where most departments tended to stay in their lane (no pun intended), everything these days seems driven from the top, with punishment and retribution often the goal. I can't help but think this is an opportunity to get back at Musk for his post on X saying that Trump was in the Epstein files (the post was deleted, and Musk issued a hand-wavy apology, but he never retracted the statement)
Tesla is shady as hell though; they intentionally market this technology as something that it is not. The idea that people will stay engaged in the driving task when they are not actually doing it is absurd. Cars should be either level 2 or level 4/5; level 3 is a completely broken idea, and Tesla isn't even actively pursuing certifications beyond it. They want to pretend it's a level 2 system and not their problem. We don't let people sell cars that are known to explode on contact, and we shouldn't allow people to sell cars that use a completely discredited automation model.
Just six more months though...
Level 2-3 is just a buzzword that doesn't mean anything. In reality the car is either safe enough to drive in the wild or it isn't. It's a binary state; the rest is just a regulatory patchwork.
It is not just a buzzword, but I don't care what words you want to use. I think my comment is very clearly critiquing the set of capabilities in level 3, where the car can do almost all the driving in many conditions but relies on a human to respond in a few seconds when it disengages. That is a broken idea that has been thoroughly discredited.
Why didn't they call it, more accurately, Supervised Self-Driving, which is based on the term they have long used?
The marketing name, FSD or Full Self-Driving with "(Supervised)" in small font and brackets, is incredibly misleading.
They didn't because no-one forced them to, and why not oversell your product when no-one is stopping you from doing it?
Regulatory agencies have been toothless towards Tesla for a long time.
Regulatory agencies are always going to lag for a long time when something new comes along. Regulations still have not caught up to social media. The unfortunate part is that by the time regulations are obviously needed, the company that would be regulated has grown into a massive behemoth with lots of money to spread around the regulators, ensuring that the regulations that do come in don't stop them but instead make it difficult for any competitor to follow.