Tesla 'Robotaxi' adds 5 more crashes in Austin in a month – 4x worse than humans
by Bender
It is important to note that this is with safety drivers. A professional driver plus their most advanced "Robotaxi" FSD version, under test with careful scrutiny, is 4x worse than the average non-professional driver alone, averaging 57,000 miles per minor collision.
Yet it is quite odd how Tesla also reports that untrained customers using old versions of FSD with outdated hardware average 1,500,000 miles per minor collision [1], roughly a 26-fold gap, when there are no penalties for incorrect reporting.
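A quick back-of-the-envelope on those two self-reported figures (simply dividing the numbers quoted above, nothing more):

```python
# Comparing the two Tesla-reported figures quoted above.
robotaxi_miles_per_collision = 57_000         # supervised Robotaxi pilot in Austin
consumer_fsd_miles_per_collision = 1_500_000  # Tesla's consumer FSD safety report

ratio = consumer_fsd_miles_per_collision / robotaxi_miles_per_collision
print(f"Consumer FSD is reported at {ratio:.0f}x the miles per collision")  # ~26x
```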
Robotaxi supervision is just an emergency brake switch.
Consumer supervision is having all the controls of the car right there in front of you. And if you are doing it right, you have your hands on the wheel and your foot on the pedals, ready to jump in.
Nah the relevant factor, which has been obvious to anyone who cared to think about this stuff honestly for years, is that Tesla's safety claims on FSD are meaningless.
Accident rates under traditional cruise control are also well below average.
Why?
Because people use cruise control (and FSD) under specific conditions. Namely: good ones! Ones where accidents already happen at a way below-average rate!
Tesla has always been able to publish the data required to really understand performance, which would be normalized by age of vehicle and driving conditions. But they have not, for reasons that have always been obvious but are absolutely undeniable now.
Yup, after getting a Tesla with a free FSD trial period, it was obviously a death trap if used in any kind of slightly complex situation (like the highway on-ramp that was under construction for a year).
At least once every few days, it would do something extremely dangerous, like try to drive straight into a concrete median at 40mph.
The way I describe it is: yeah, it’s self-driving and doesn’t quite require the full attention of normal driving, but it still requires the same amount of attention as supervising a teenager in the first week of their learning permit.
If Tesla were serious about FSD safety claims, they would release data on driver interventions per mile.
Also, the language when turning on FSD in the vehicle is just insulting: the whole thing about how it's basically an iPhone app, but shucks, the lawyers are just so silly and conservative that we have to call it beta.
> the same amount of attention as supervising a teenager in the first week of their learning permit.
Yikes! I’d be a nervous wreck after just a couple of days.
You learn when it’s good and bad. It definitely has a “personality”. It is awesome in certain situations, like bumper to bumper traffic.
I kept it for a couple months after the trial, but canceled because the situations it’s good at aren’t the situations I usually face when driving.
> It definitely has a “personality”.
You mean it has obvious bugs.
The most basic adaptive cruise control is "awesome" in bumper to bumper traffic.
For some reason many manufacturers intentionally disable it at low speeds or won’t let it restart from stop. Super annoying and entirely unnecessary
I understand not restarting from a stop unprompted. There are simply too many situations on the road where automatically moving from a stop may be undesirable in case the driver isn't paying attention. Stop signs, four way stops, yield situations, probably more. Safer overall to make it an intentional action by the driver.
Also, if it actually worked, Tesla's marketing would literally never shut up about it because they have a working fully self-driving car. That would be the first, second, and third bullet point in all their marketing, and they would be right to do that. It's an incredible feature differentiator from all their competition.
The only problem is, it doesn't work.
More importantly, we would have independent researchers looking at the data and commenting. I know this data exists, but I've never seen anyone who has the data and ability to understand it who doesn't also have a conflict of interest.
If it actually worked, Tesla would include an indemnity clause for all accidents while it’s active.
> Robotaxi supervision is just an emergency brake switch
That was the case when they first started the trial in Austin. The employee in the car was a safety monitor sitting in the front passenger seat with an emergency brake button.
Later, when they started expanding the service area to include highways, they moved them to the driver's seat on those trips so that they could completely take over if something unsafe happened.
Interesting.
I wonder if these newly-reported crashes happened with the employee positioned in e-brake or in co-pilot mode.
Humans are extremely bad at vigilance when nothing interesting is happening. Lookout is a life-critical role on the railways that you might be assigned as a track worker: your whole job is to watch for trains and alert your co-workers when one is coming, so they can retreat to a safe position while it passes. That seems easy, and these are typically close friends (you work with them every day, rotating roles) whom you'd certainly not want injured or killed. But it turns out it's basically impossible to stay vigilant for more than an hour or two, tops. Having insisted that you aren't tired, since you're just standing somewhere watching while your mates work hard on the track, you nevertheless lose focus and, oops, a train passes without your conscious awareness and your colleague dies or suffers a life-changing injury.
This is awkward for any technologies where we've made it boring but not safe and so the humans must still supervise but we've made their job harder. Waymo understood that this is not a place worth getting to.
> Humans are extremely bad at vigilance when nothing interesting is happening
It would be interesting to try training a non-human animal for this. It would probably not work for learning things like rules of the road, but it might work for collision avoidance.
I know of at least two relevant experiments that suggest it might be possible.
1. During WWII, when the US was willing to consider nearly anything that might win the war (short of totally insane occult or crackpot theories that the Nazis wasted money on), they sponsored a project by B.F. Skinner to investigate using pigeons to guide bombs.
Skinner was able to train pigeons to look at an image projected on a screen that showed multiple boats, a mix of US and Japanese boats, and move their heads in a harness that would steer a falling bomb to a Japanese boat. They never actually deployed this, but they had tests in a simulator and the pigeons did a great job.
2. I can't give a cite for this one, because I read it in a textbook over 40 years ago. A researcher trained pigeons to watch some parts coming off an assembly line, and if they had any visible defects peck a switch.
There were a couple really clever things about this. To train an animal to do this you have to initially frequently reward them when they are right. When they have learned the desired behavior you can then start rewarding them less frequently and they will maintain the behavior. You will have to keep occasionally rewarding correct behavior though to keep the behavior from eventually going away.
The way they handled this ongoing occasional reward was to use groups of 3 pigeons. The part rejection system was modified to go with a majority vote. Whenever it was not unanimous, the 2 pigeons in the majority got a reward. This happened frequently enough to keep the behavior from going extinct in the birds, but infrequently enough to avoid fat pigeons.
Once they had 3 pigeons trained (with a human deciding on the rewards during the initial training, when frequent rewards are needed) and working well on the line, they could use those 3 to train more. They did that by adding the trainee as a 4th member of the group. The trainee's vote was not counted, but if the other 3 were unanimous and the trainee agreed, the trainee was rewarded. This produced the frequent rewards needed to establish the behavior.
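A minimal sketch of that voting-and-reward scheme as described above (the function, the vote encoding, and the reward bookkeeping are illustrative stand-ins, not details from the original study):

```python
def inspect_part(votes, trainee_vote=None):
    """Majority-vote defect detection with the reward scheme described above.

    votes: list of 3 booleans from the trained birds, True = pecked (part looks defective).
    trainee_vote: optional 4th vote that never counts toward the decision.
    Returns (reject_part, rewarded_birds).
    """
    reject = sum(votes) >= 2                 # majority of the 3 trained birds decides
    rewarded = []
    if votes.count(reject) == 2:             # 2-1 split: the two in the majority get a treat
        rewarded = [i for i, v in enumerate(votes) if v == reject]
    # Trainee is rewarded only when the trained birds are unanimous and it agrees.
    if trainee_vote is not None and len(set(votes)) == 1 and trainee_vote == votes[0]:
        rewarded.append("trainee")
    return reject, rewarded

# A 2-1 split rejects the part and rewards the two birds in the majority;
# a unanimous vote plus an agreeing trainee rewards only the trainee.
print(inspect_part([True, False, True]))        # (True, [0, 2])
print(inspect_part([True, True, True], True))   # (True, ['trainee'])
```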
The groups of 3 pigeons could do this all day with an error rate orders of magnitude lower than the error rate of the human part inspector. The human was good at the start of a shift, but rapidly got worse as the shift went on.
Ultimately the company that had let the researchers try this decided not to actually use it in production. They felt that no matter how much better the pigeons did, and no matter how publicly they documented that fact, ads from competitors about how the company was using birds to inspect its parts would cost too many sales.
See also:
>Jack (died 1890) was a Chacma baboon who was an assistant to a disabled railway signalman, James Wide, in South Africa.
>Jack was the pet and assistant of double leg amputee signalman James Wide, who worked for the Cape Town–Port Elizabeth Railway service. James "Jumper" Wide had been known for jumping between railcars until an accident where he fell and lost both of his legs below the knee. To assist in performing his duties, Wide purchased Jack in 1881, and trained him to push his wheelchair and to operate the railways signals under supervision.
>An official investigation was initiated after someone reported that a baboon was observed changing railway signals at Uitenhage near Port Elizabeth.
>After initial skepticism, the railway decided to officially employ Jack once his job competency was verified. He was paid twenty cents a day, and half a bottle of beer each week. It is widely reported that in his nine years of employment with the railway company, Jack never made a single mistake.
Maybe human part inspectors should get treats for inspecting parts correctly
> And if you are doing it right, you have your hands on wheel and foot on the pedals ready to jump in.
Seems like there's zero benefit to this, then. Being required to pay attention, but actually having nothing (i.e., driving) to keep me engaged, seems like the worst of both worlds. Your attention would constantly be drifting.
Similarly, Tesla using teleoperators for their Optimus robots is a safety fake for robots that are not autonomous either. They are constantly trying to cover their inability to make anything autonomous. Cheap lidar or radar would likely have prevented those "hitting stationary objects" accidents. Just because the Führer says it does not make it so.
So the trillion dollar company deployed 1 ton robots in unconstrained public spaces with inadequate safety data and chose to use objectively dangerous and unsafe testing protocols that objectively heightened risk to the public to meet marketing goals? That is worse and would generally be considered utterly depraved self-enrichment.
We also dump chemicals into the water, air, and soil that aren't great for us.
Externalized risks and costs are essential for many business to operate. It isn't great, but it's true. Our lives are possible because of externalized costs.
The EU has one good regulation ... if safety can be engineered in, it must be.
OSHA also has regulations to mitigate risk ... lockout/tagout.
Both mitigate external risks. Good regulation mitigates known risk factors ... unknowns take time to learn about.
The Apollo program learned this when the hatch was bolted shut and the pure-oxygen environment burned everyone alive inside. Safety first became the basis of decision making.
Yes, those are bad as well. Are you seriously taking as your moral foundation that we need to poison the water supply to ensure executives get their bonuses? Is that somehow not utterly depraved self-enrichment?
Sorry, that didn't translate well. I'm not in favor of it. I'm simply saying that many many many companies operate under the condition that external problems are a natural part of doing business.
To be clear, I'm not in support of dumping chemicals into the world, just calling out that experimenting on the public with large robotic cars is perfectly in line with American business practice.
They had supervisors in the passenger seat for a while but moved them back to the driver's seat, then moved some out to chase cars. In the ones where they are in the driver's seat they were able to take over the wheel, weren't they?
That just makes the Robotaxi even more irresponsible.
I think they were so used to defending Autopilot that they got confused.
Living, breathing drivers have incentives not to crash
Gigantic lithium batteries on wheels guided by WIP software do not
I would guess the FSD numbers get help from drivers taking over during difficult situations and usage weighted towards highway miles?
not to mention turning off FSD milliseconds before impact
To be fair to Tesla and other self driving taxis, urban and shorter journeys usually have worse collision rates than the average journey - and FSD is likely to be owners driving themselves to work etc.
Great, we can use Tesla's own numbers once again by selecting non-highway. The average human goes 178,000 non-highway miles per minor collision, which puts "professional driver + most advanced 'Robotaxi' FSD version under test with careful scrutiny" at 3x worse than the average non-professional driver alone.
They advertise and market a safety claim of 986,000 non-highway miles per minor collision. They are claiming, risking the lives of their customers and the public, that their objectively inferior product with objectively worse deployment controls is 1,700% better than their most advanced product under careful controls and scrutiny when there are no penalties for incorrect reporting.
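Redoing that arithmetic with the figures quoted in these two comments:

```python
# Non-highway comparison using the figures quoted above.
human_nonhighway = 178_000    # average human: non-highway miles per minor collision
robotaxi = 57_000             # supervised Robotaxi figure from the article
consumer_fsd_claim = 986_000  # Tesla's advertised non-highway consumer FSD figure

print(f"Robotaxi vs average human: {human_nonhighway / robotaxi:.1f}x worse")            # ~3.1x
print(f"Claimed consumer FSD vs Robotaxi: {consumer_fsd_claim / robotaxi:.1f}x better")  # ~17x, i.e. ~1,600-1,700%
```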
Would be nice if we had a functioning legislative body that did more than pass a single "give billionaires more tax breaks" bill each term.
It is kind of comparing apples to oranges. The more appropriate comparison would be with other taxis.
https://www.rubensteinandrynecki.com/brooklyn/taxi-accident-...
Generally about 1 accident per 217k miles. Which still means that Tesla is having accidents at a 4x rate. However, there may be underreporting and that could be the source of the difference. Also, the safety drivers may have prevented a lot of accidents too.
I'm sure insurers will love your arguments and simply insure Tesla at the exact same rate they insure everyone else.
I think Tesla's egg is cooked. They need a full suite of sensors ASAP. Get rid of Elon and you'll see an announcement in weeks.
Large fleet operators tend to self insure rather than having traditional auto insurance for what it's worth.
If you have a large fleet, say getting in 5-10 accidents a year, you can't buy a policy that's going to consistently pay out more than the premium, at least not one that the insurance company will be willing to renew. So economically it makes sense to set that money aside and pay out directly, perhaps covering disastrous losses with some kind of policy.
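A toy illustration of why that pencils out; every dollar figure below is an assumption invented for the example, not fleet data:

```python
# Self-insurance vs. a conventional policy for a predictable accident load.
accidents_per_year = 8            # assumed fleet accident count
avg_claim = 20_000                # assumed average cost per accident, USD
expected_claims = accidents_per_year * avg_claim           # $160,000 of predictable losses
insurer_loading = 0.35            # assumed insurer overhead + profit margin
premium = expected_claims * (1 + insurer_loading)          # ~$216,000 for the same risk

print(f"Expected self-insured cost: ${expected_claims:,}")
print(f"Likely premium for the same risk: ${premium:,.0f}")
# Setting the money aside and buying only catastrophic cover avoids paying the loading.
```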
> I'm sure insurers will love your arguments and simply insure Tesla at the exact same rate they insure everyone else.
Insurers would charge 4 times as much for insurance I think. Which matches what I've seen when quoting insurance for Teslas before.
Always comes up but think it's worth repeating: if he's not there the stock will take a massive haircut and no Tesla investor wants that regardless of whether it would improve Tesla's car sales or its self-driving. Elon is the stock price for the most part. And just to muse on the current reason, it's not Optimus or self driving, but an eventual merger with SpaceX. My very-not-hot take is that they'll merge within months of the SpaceX IPO. A lot of folks say it ain't happening, but I think that's entirely dependent on how well Elon and Trump are getting along at the moment the merger is proposed (i.e., whether Trump gives his blessing in advance of any announcement).
Tesla's only chance at this point is government money. Consumers just aren't buying. It doesn't help that Elon was heavily involved with Epstein and is constantly spouting white nationalist propaganda on X. This is on top of his gaffe with "My Heart Goes Out to You". Only a certain type of consumer is going to buy from a company like that.
Yup, as context: over the same period Waymo had 101 collisions according to the same NHTSA dataset.
Waymo drives 4 million miles every week (500k+ miles each day). The vast majority of those collisions happened while the Waymos were stationary (they don't redact the narrative in crash reports like Tesla does, so you know what happened). That is an incredible safety record.
Is this the same time or the same miles driven? I think the former, and of course I get that's what you wrote, but I'm trying to understand what to take away from your comment.
The old FSD was mostly used on freeways that naturally have a much lower incident rate per mile. And a lot of incidents that happen are caused by inattention/fatigue.
So this number is plausible.
I only flip on FSD when on the highway. It has come a long way but still too many problems on local roads.
The problem Tesla faces, and that their investors are unaware of, is that just because you have a Model Y that has driven you around for thousands of miles without incident does not mean Tesla has autonomous driving solved.
Tesla needs their FSD system to be driving hundreds of thousands of miles without incident. Not the 5,000 miles Michael FSD-is-awesome-I-use-it-daily Smith posts incessantly on X about.
There is this mismatch where the overrepresented people who champion FSD say it's great and has no issues, when the reality is that none of them are remotely close to putting in enough miles to cross the "it's safe to deploy" threshold.
A fleet of robotaxis will do more FSD miles in an afternoon than your average Tesla fanatic will do in a decade. I can promise you that Elon was sweating hard during each of the few unsupervised rides they have offered.
> hundreds of thousands of miles without incident
Almost there. Humans kill one person every 100 million miles driven. To reach mass adoption, self-driving cars need to kill one every, say, billion miles. Which means dozens or hundreds of billions of miles driven to reach statistical significance.
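A rough way to see where the "dozens or hundreds of billions of miles" intuition comes from, using the standard zero-event "rule of three" bound (a sketch under simplifying assumptions, not the article's methodology):

```python
# Rule of three: observing zero events in N trials puts the 95% upper
# confidence bound on the event rate at roughly 3/N (Poisson approximation).
human_rate = 1 / 100e6    # ~1 fatality per 100 million miles (figure quoted above)
target_rate = 1 / 1e9     # the hypothetical 1-per-billion-miles target

miles_for_parity = 3 / human_rate     # ~300 million fatality-free miles
miles_for_target = 3 / target_rate    # ~3 billion fatality-free miles
print(f"{miles_for_parity:,.0f} zero-fatality miles to bound the rate at human parity")
print(f"{miles_for_target:,.0f} zero-fatality miles to bound the rate at 1 per billion")
# Any fatalities at all push the required mileage up fast, hence the
# "dozens or hundreds of billions of miles" before the comparison is solid.
```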
> to reach mass adoption, self-driving cars need to kill one every, say, billion miles
They need to be around parity. So a death every 100mm miles or so. The number of folks who want radically more safety are about balanced by those who want a product in market quicker.
> They need to be around parity.
I don't think so.
The deaths from self-driving accidents will look _strange_ and _inhuman_ to most people. The negative PR from self-driving accidents will be much worse for every single fatal collision than a human driven fatality.
I think these things genuinely need to be significantly safer for society to be willing to tolerate the accidents that do happen. Maybe not a full order of magnitude safer, but I think it will need to be clearly safer than human drivers and not just at parity.
> negative PR from self-driving accidents will be much worse for every single fatal collision than a human driven fatality
We're speaking in hypotheticals about stuff that has already happened.
> I think these things genuinely need to be significantly safer for society to be willing to tolerate the accidents that do happen
I used to as well. And no doubt, some populations will take this view.
They won't have a stake in how self-driving cars are built and regulated. There is too much competition between U.S. states and China. Waymo was born in Arizona and is now growing up in California and Florida. Tesla is being shaped by Texas. The moment Tesla or BYD get their shit together, we'll probably see federal preëmption.
(Contrast this with AI, where local concerns around e.g. power and water demand attention. Highways, on the other hand, are federally owned. And D.C. exerting local pressure with one hand while holding highway funds in the other is long precedented.)
> The deaths from self-driving accidents will look _strange_ and _inhuman_ to most people.
I like to quip that error-rate is not the same as error-shape. A lower rate isn't actually better if it means problems that "escape" our usual guardrails and backup plans and remedies.
You're right that some of it may just be a perception-issue, but IMO any "alien" pattern of failures indicates that there's a meta-problem we need to fix, either in the weird system or in the matrix of other systems around it. Predictability is a feature in and of itself.
I know this sounds bad, but I wonder: if you put an LLM in the vehicle that can control basic stuff (like the radio, climate controls, windows, change destination, maybe friendly chatter) but no actual vehicle control, people will humanize the car and be much more forgiving of mistakes. I feel pretty certain that they would.
About half of road deaths involve drivers who are drunk or high. But only a very small fraction of drivers drive drunk or high - 50% of deaths are caused by 2% of drivers.
A self-driving car that merely achieves parity would be worse than 98% of the population.
Gotta do twice the accident-free mileage to achieve parity with the sober 98%.
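A quick sanity check of that "twice the mileage" claim (assuming the impaired 2% account for a negligible share of total miles, which is itself an approximation):

```python
# If the sober 98% of drivers cause only 50% of deaths while driving ~98% of miles,
# their per-mile fatality rate is about half the population average.
sober_share_of_deaths = 0.5    # figure quoted above
sober_share_of_miles = 0.98    # assumption: miles roughly proportional to driver count

sober_rate_vs_average = sober_share_of_deaths / sober_share_of_miles
print(f"Sober-driver fatality rate is about {sober_rate_vs_average:.2f}x the population average")
# ~0.51x, so matching them takes roughly double the miles per fatality.
```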
I disagree. The 1:100M statistic is too broad, and includes many extremely unsafe drivers. If we restrict our data to only people who drive sober, during normal weather conditions, no speed racing or other deliberately unsafe choices, what is the expected number of miles per fatality?
1 in a billion might be a conservative target. I can appreciate that statistically, reaching parity should be a net improvement over the status quo, but that only works if we somehow force 100% adoption. In the meantime, my choice to use a self-driving car has to assess its risk compared to my driving, not the drunk's.
> I disagree. The 1:100M statistic is too broad, and includes many extremely unsafe drivers
To be clear, I'm not arguing for what it should be. I'm arguing for what it is.
I tend to drive the speed limit. I think more people should. I also recognise there is no public support for ticketing folks going 5 over.
> my choice to use a self-driving car has to assess its risk compared to my driving, not the drunk's
All of these services are supply constrained. That's why I've revised my hypothesis. There are enough folks who will take that car before you get comfortable to make it lucrative to fill streets with them.
(And to be clear, I'll ride in a Waymo or a Cybercab. I won't book a ride with a friend or my pets in the latter.)
This gets near something I was thinking about. Most of the numbers seem to assume that injuries, injury severity, and deaths are all some fixed proportion of each other. But is that really true in the context of self-driving cars of all types?
It seems reasonable that the deaths and major injuries come highly disproportionally from excessively high speed, slow reaction times at such speeds, going much too fast for conditions even at lower absolute speeds. What if even the not very good self-driving cars are much better at avoiding the base conditions that result in accidents leading to deaths, even if they aren't so good at avoiding lower-speed fender-benders?
If that were true, what would that mean to our adoption of them? Maybe even the less-great ones are better overall. Especially if the cars are owned by the company, so the costs of any such minor fender-benders are all on them.
If that's the case, maybe Tesla's camera-only system is fairly good actually, especially if it saves enough money to make them more widespread. Or maybe Waymo will get the costs of their more advanced sensors down faster and they'll end up more economical overall first. They certainly seem to be doing better at getting bigger faster in any case.
> To reach mass adoption, self-driving car need to kill one every, say, billion miles.
Important correction “kill one or less, per billion miles”. Before someone reluctantly engineers an intentional sacrifice to meet their quota.
> Important correction “kill one or less, per billion miles”. Before someone reluctantly engineers an intentional sacrifice to meet their quota.
Pedantic correction: "kill one or fewer, per billion miles"
Almost - fatalities are obviously important, but not the only metric.
You can prove Tesla's system is a joke with a multitude of metrics.
A death is a catastrophic case, but even a mild collision with bumps and bruises to the people involved would set back Tesla years.
People have an expectation that self driving cars will be magical in ability. Look at the flak Waymo has received despite its most egregious violations being fender-bender equivalents.
Yeah, my response is to say some version of “you’re bringing anecdote knives to a statistics gunfight”
Tesla's Robotaxis are bringing a bad name to the entire field of autonomous driving. The average consumer isn't going to make a distinction between Tesla vs. Waymo. When they hear about these Robotaxi crashes, they will assume all robotic driving is crash prone, dangerous and irresponsible.
> The average consumer isn't going to make a distinction between Tesla vs. Waymo.
I think they do. That's the whole point of brand value.
Even my non-tech friends seem to know that with self-driving, Waymo is safe and Tesla is not.
Yep. Especially when one of the brands is Tesla.
Once Elon put himself at the epicenter of American political life, Tesla stopped being treated as a brand, and more a placeholder for Elon himself.
Waymo has excellent branding and first to market advantage in defining how self-driving is perceived by users. But, the alternative being Elon's Tesla further widens the perception gap.
I think the Tesla brand and the Elon brand have always been attached at the hip. This was fine when the Elon brand was "eccentric founder who likes memes, wants to reduce our dependence on fossil fuels, and plans to launch a Mars colony." It only became a marketing problem when he went down the right wing rabbit hole and started sieg heiling on stage.
He has proven to be untrustworthy for much longer than his trip down the right wing rabbit hole. For me, it started when he threw out the accusation of pedophilia against the cave diver trying to rescue students. And since then it's become clear that he will say whatever he wants without regard for reality in any meaningful way. Whether it was promising FSD over a decade ago, which he still hasn't delivered, lying about video game proficiency, or even his nonsensical statements about Twitter technology after he acquired the company, it's clear that he's entered the realm where consequences don't really matter to him and he will say or do whatever he wants. There is no trust to be found there.
I’m not so sure. I think Tesla is so tied up in Musk’s personality that Tesla and Waymo aren’t in the same field, likewise with Optimus. Tesla isn’t self-driving, it is Tesla. Especially now that many mainstream vehicles ship with various levels of self-driving, a lot of people have a lot of exposure to it. Tesla has the best brand recognition but they no longer define the product. Tesla is Tesla, Waymo is self-driving.
Most people are able to be more nuanced than your typical hn zealot. They strongly dislike Musk, but are begrudgingly able to give credit where credit is due wrt Tesla, SpaceX, etc.
I really don't think that's true. Think Uber vs. Lyft. I know I distinguish between the two even if the experience is usually about the same and people I know where this has come up in conversation generally see Lyft as "off-brand" and a little more skeevy. They only take Lyfts when it's cheaper or quicker than Uber.
I'm probably not the average consumer in this situation but I was in Austin recently and took both Waymo and Robotaxi. I significantly preferred the Waymo experience. It felt far more integrated and... complete? It also felt very safe (it avoided getting into an accident in a circumstance where I certainly would have crashed).
I hope Tesla gets their act together so that the autonomous taxi market can engage in real price discovery instead of "same price as an Uber but you don't have to tip." Surely it's lower than that especially as more and more of these vehicles get onto the road.
Unrelated to driving ability but related to the brand discussion: that graffiti font Tesla uses for Cybertruck and Robotaxi is SO ugly and cringey. That alone gives me a slight aversion.
I worked in some fully autonomous car projects back in ~2010. I would say every single company and the industry at large felt HUGE pressure to not have any incidents, as a single bad incident from one company can wreck the entire initiative.
Which is ironic as human driven vehicle collisions are so common-place that they don't even make the news.
yes, I talk to people and they have confidence in tesla. But then I mention that waymo is level 4 and tesla is level 2, and it doesn't make any difference.
I don't know what a clear/direct way of explaining the difference would be.
Yep, feels a lot like that submarine that got crushed trying to get to the Titanic a year or two ago. It made the entire marine industry look worse, and other companies making submarines were concerned it would hurt their business.
Inb4: not remotely in the marine field, so a genuine question. Would it really make an impact?
Robotaxis market is much broader than the submersibles one, so the effect of consumers' irrationality would be much bigger there. I'd expect an average customer of the submarines market to do quite a bit more research on what they're getting into.
Having the whole world meming on rich dudes in submarines could plausibly make the whole industry seem less cool to people with the money to buy even a good submarine. Imagine being a rich dude with a new submarine and everybody you talk to about it snickers about you getting crushed like Stockton. Maybe you'd just buy a bigger yacht and skip the submarine, which you were probably only buying for the cool factor in the first place...
The difference is the OceanGate Titan failure only harmed those who didn't do their due diligence and the grossly negligent owner. The risk was contained to those who explicitly opted in. In this case, Tesla Robotaxis harm others to keep Tesla's valuation and share price propped up. The performance art is the investor relations.
This is actually a rational explanation for this. Perhaps Elon wants to sink the whole industry until he can actually build a self driving car like Waymo's.
Perhaps he's bad at his job
He wants to break trust in the whole industry by giving Tesla a massive black eye, undoubtedly hurting their stock and sales significantly, in order to, later, create actual self driving cars into the market that he's already poisoned?
Totally rational.
Elon's drug addled brain doesn't make rational decisions.
Well, admittedly maybe I should have said "rational to Elon on Ketamine"
> are bringing a bad name to the entire field of autonomous driving.
A small number of humans bring a bad name to the entire field of regular driving.
> The average consumer isn't going to make a distinction between Tesla vs. Waymo.
What's actually "distinct?" The secret sauce of their code? It always amazed me that corporate giants were willing to compete over cab rides. It sort of makes me feel, tongue in cheek, that they have fully run out of ideas.
> they will assume all robotic driving is crash prone
The difference in failure modes between regular driving and autonomous driving is stark. Many consumers feel the overall compromise is unviable even if the error rates between providers are different.
Watching a Waymo drive into oncoming traffic, pull over, and hear a tech support voice talk to you over the nav system is quite the experience. You can have zero crashes, but if your users end up in this scenario, they're not going to appreciate the difference.
They're not investors. They're just people who have somewhere to go. They don't _care_ about "the field". Nor should they.
> dangerous and irresponsible.
These are, in fact, pilot programs. Why this lede always gets buried is beyond me. Instead of accepting the data and incorporating it into the world view here, people just want to wave their hands and dissemble over how difficult this problem _actually_ is.
Hacker News has always assumed this problem is easy. It is not.
> Hacker News has always assumed this problem is easy. It is not.
That’s the problem right there.
It’s EXTREMELY hard.
Waymo has very carefully increased its abilities, tip-toeing forward little by little until after all this time they’ve achieved the abilities they have with great safety numbers.
Tesla appears to continuously make big jumps they seem totally unprepared for yelling “YOLO” and then expect to be treated the same when it doesn’t work out by saying “but it’s hard.”
I have zero respect for how they’ve approached this since day 1 of autopilot and think what they’re doing is flat out dangerous.
So yeah. Some of us call them out. A lot. And they seem to keep providing evidence we may be right.
I’ve often felt that much of the crowd touting how close the problem was to being solved was conflating a driving problem to just being a perception problem. Perception is just a sub-space of the driving problem.
Genuine question though: has Waymo gotten better at their reporting? A couple years back they seemingly inflated their safety numbers by sanitizing the classifications with subjective “a human would have crashed too so we don’t count it as an accident”. That is measuring something quite different than how safety numbers are colloquially interpreted.
It seems like there is a need for more standardized testing and reporting, but I may be out of the loop.
> achieved the abilities they have with great safety numbers.
Driving around in good weather and never on freeways is not much of an achievement. Having vehicles that continually interfere in active medical and police cordons isn't particularly safe, even though there haven't been terrible consequences from it, yet.
If all you're doing is observing a single number you're drastically under prepared for what happens when they expand this program beyond these paltry self imposed limits.
> Some of us call them out.
You should be working to get their certificate pulled at the government level. If this program is so dangerous then why wouldn't you do that?
> And they seem to keep providing evidence we may be right.
It's tragic you can't apply the same logic in isolation to Waymo.
Freeways are far easier for a robot to drive on than streets. Driving on freeways would significantly lower Waymo's accident per mile rate.
The difference is that accidents on a freeway are far more likely to be fatal than accidents on a city street.
Waymo didn't avoid freeways because they were hard, they avoided them because they were dangerous.
> Driving on freeways would significantly lower Waymo's accident per mile rate.
Maybe. We don’t know for sure.
You seem to frame that a bit like Waymo is cheating or padding their numbers.
But I see that as them taking appropriate care and avoiding stupid risks.
Anyway as someone else pointed out they recently started doing freeways in Austin so we’ll know soon.
> You seem to frame that a bit like Waymo is cheating or padding their numbers.
Not sure how you read that. I'm saying Waymo was prioritizing safety.
Oh, sorry. I thought you were arguing from the other side, saying that Waymo only looked good because they were avoiding anything difficult.
Same argument, different sentiment.
I said they were avoiding the easy thing.
Freeway accidents, due to their nature, are a lot harder to ignore and underreport than accidentally bumping or scraping into another car at low speeds. It's like using murder rates to estimate real crime rates because murders, unlike most other crimes, are far more likely to be properly documented.
Waymo started rolling out freeway trips in some cities late last year
Elon definitely has this cult of personality around him where people will jump in and defend his companies (as a stand-in for him) on the internet, even in the face of some common sense observations. I don't get the sense that anything you've said is particularly reasonable outside of being lured in by Elon's personality.
This is absolutely true. There is a flip side however, where people who dislike Elon Musk will sometimes talk up his competitors, seemingly for no good reason other than them being at least nominally competitors to Musk companies. Nikola and Spinlaunch are two that come to mind; quite blatant scams that have gotten far too much attention because they aren't Musk companies.
Tesla FSD is crap. But I also think we wouldn't see quite so much praise of Waymo unless Tesla also had aspirations in this domain. Genuinely, what is so great about a robo taxi even if it works well? Do people really hate immigrants this much?
I think we’d see praise, but maybe not as much. Every time it’s clear Tesla screwed up it’s an incredibly obvious thing to do to compare them to the number one self driving car out there. Tesla provides such an obvious anchor point for comparisons it’s really hard for Waymo not to come out on top.
What’s so great about a robotaxi even if it works well? It’s neat. As a technology person I like it exists. I don’t know past that. I’ve never used one they’re not deployed where I live.
It isn't about hatred of the human drivers for me. Waymo's service is so safe and consistent that I would trust my 10-yr-old to take a ride in it solo if it were permitted by the ToS. Most Uber/Lyft/etc. rides are just as safe, but due to the inconsistency I would never reach that level of trust.
I don't live in a covered area, but when I am in range I will gladly pay 10-20% more for a Waymo ride than an Uber/Lyft/etc.
Kind of like how people maintained that LLMs were trash well past the point where it was obvious that that wasn't true anymore, I often wonder how many people who talk confidently about Tesla FSD have actually used a recent version. Because when we tried a recent FSD and Waymo, we found FSD to be excellent in handling pretty complex scenarios, including one of the worst, a busy airport loop, and we found Waymo to behave a bit weirdly (but still good). But FSD clearly isn't the dumpster fire that people try to make it out to be. v12 was a bit sketchy, and I was too nervous to use it past the first couple of times I tried it, but v14 is great.
Waymo overall has a FANTASTIC safety record and has been improving steadily. You can't say the same about Tesla's FSD and Robotaxi.
LIDAR gives Waymo a fundamental advantage.
dunning-kruger effect at the corporate level?
As I said on earlier reports about this, it's difficult to draw statistical comparisons with humans because there's so little data. Having said that, it is clear that this system just isn't ready, and it's kind of wild that a couple of those crashes would've been easily preventable with parking sensors that come equipped as standard on almost every other car.
In some spaces we still have rule of law - when xAI started doing the deepfake nude thing we kind of knew no one in the US would do anything but jurisdictions like the EU would. And they are now. It's happening slowly but it is happening. Here though, I just don't know if there's any institution in the US that is going to look at this for what it is - an unsafe system not ready for the road - and take action.
> the deepfake nude thing
the issue is that these tools are widely accessible, and at the federal level, the legal liability is on the person who posts it, not who hosts the tool. this was a mistake that will likely be corrected over the next six years
due to the current regulatory environment (trump admin), there is no political will to tackle new laws.
> I just don't know if there's any institution in the US that is going to look at this for what it is - an unsafe system not ready for the road - and take action.
unlike deepfakes, there are extensive road safety laws and civil liability precedent. texas may be pushing tesla forward (maybe partially for ideological reasons), but it will be an extremely hard sell to get any of the major US cities to get on board with this.
so, no, i don't think you will see robotaxis on the roads in blue states (or even most red states) any time soon.
> legal liability is on the person who posts it, not who hosts the tool.
In the specific case of grok posting deepfake nudes on X. Doesn't X both create and post the deepfake?
My understanding was, Bob replies in Alice's thread, "@grok make a nude photo of Alice" then grok replies in the thread with the fake photo.
That specific action is still instigated by Bob.
Where grok is at risk is in not responding after they are notified of the issue. It's trivial for grok to ban some keywords here and they aren't doing it; that's a legal issue.
Sure Bob is instigating the harassment, then X.com is actually doing the harassment. Or at least, that's the case plaintiff's attorneys are surely going to be arguing.
I don't see how it's fundamentally any different to mailing someone harassing messages or distressing objects.
Sure, in this context the person who mails the item is the one instigating the harassment but it's the postal network that's facilitating it and actually performing the "last mile" of harassment.
The very first time it happened X is likely off the hook.
However, notification plays a role here: there's a bunch of things the post office will do if someone regularly tries to use them for this and you ask the post office to act. The issue therefore is if people complain and then X does absolutely nothing while having a plethora of reasonable options to stop this harassment.
https://faq.usps.com/s/article/What-Options-Do-I-Have-Regard...
You may file PS Form 1500 at a local Post Office to prevent receipt of unwanted obscene materials in the mail or to stop receipt of "obscene" materials in the mail. The Post Office offers two programs to help you protect yourself (and your eligible minor children).
Grok posts the pictures publicly, everyone can see them.
The postal network transports a letter, and only the person reading the letter can see the contents.
These situations are in no way comparable.
The difference is the post office isn't writing the letter.
if grok never existed and X instead ran a black-box-implementation "press button receive CP" webapp, X would be legally culpable and liable each time a user pressed the button, for production plus distribution
the same is true if the webapp has a blank "type what you want I'll make it for you" field and the user types "CP" and the webapp makes it.
> so, no, i don't think you will see robotaxis on the roads in blue states
Truly baffled by this genre of comment. "I don't think you will see <thing that is already verifiably happening> any time soon" is a pattern I'm seeing way more lately.
Is this just denying reality to shape perception or is there something else going on? Are the current driverless operations after your knowledge cutoff?
robotaxi is the name of the tesla unsupervised driving program (as stated in the title of this hn post) and if you live in a parallel reality where they're currently operating unsupervised in a blue state, or if texas finally flipped blue for you, let me know how's going for you out there!
for the rest of us aligned to a single reality, robotaxis are currently only operating as robotaxis (unsupervised) in texas (and even that's dubious, considering the chase car sleight of hand).
of course, if you want to continue to take a weasely and uncharitable interpretation of my post because i wasn't completely "on brand", you are free to. in which case, i will let you have the last word, because i have no interest in engaging in such by-omission dishonesty.
You were using it in the general sense, unless you believe the "extensive road safety laws and civil liability precedent" only apply to Tesla branded Robotaxis. If you do believe that I'd love to hear why.
I'm not one to nitpick grammar but if you want to convey something is a proper noun you capitalize it.
> robotaxi is the name of the tesla unsupervised driving program
“robotaxi” is a generic term for (when the term was coined, hypothetical) self-driving taxicabs, that predates Tesla existing. “Tesla Robotaxi” is the brand-name of a (slightly more than merely hypothetical, today) Tesla service (for which a trademark was denied by the US PTO because of genericness). Tesla Robotaxi, where it operates, provides robotaxis, but most robotaxis operating today are not provided by Tesla Robotaxi.
> Tesla 'Robotaxi' adds 5 more crashes in Austin in a month – 4x worse than humans
hm yes i can see where the confusion lies
Just because someone tells you to produce child pornography, and just because you are able to, doesn't mean you have to do it. Other model providers don't have this problem...
that is an ethical and business problem, not entirely a legal problem (currently). hopefully, it will universally be a legal problem in the near future, though. and frankly, anyone paying grok (regardless of their use of it) is contributing to the problem
It is not ethical to wait for legal solutions while in the meantime producing fake child pornography with your AI product.
Legal things can be immoral, and immoral things can be legal. We have a duty to live morally; legal is only words in books.
I live morally. I assume you do - the vast vast majority of reading this comment will not ask AI to produce child porn. However a small minority will, which is why we have laws and police.
If you have to wait for the government to tell you to stop producing CP before you stop, you are morally bankrupt.
It's only an ethics and business problem if the produced images are purely synthetic and in most jurisdictions even that is questionable. Grok produced child pornography of real children which is a legal problem.
>and at the federal level, the legal liability is on the person who posts it, not who hosts the tool. this was a mistake that will likely be corrected over the next six years
[citation needed]
Historically hosts have always absolutely been responsible for the materials they host, see DMCA law, CSAM case law...
no offense but you completely misinterpreted what i wrote. i didnt say who hosts the materials, i said who hosts the tool. i didnt mention anything about the platform, which is a very relevant but separate party.
if you think i said otherwise, please quote me, thank you.
> Historically hosts have always absolutely been responsible for the materials they host,
[citation needed] :) go read up on section 230.
for example with dmca, liability arises if the host acts in bad faith, generates the infringing content itself, or fails to act on a takedown notice
that is quite some distance from "always absolutely". in fact, it's the whole point of 230
pedantically correct, but there is a good argument that if you host an AI tool that can easily be made to make child porn, that no longer applies. a couple years ago when AI was new you could argue that you never thought anyone would use your tool to create child porn. However, today it is clear some people are doing that and you need to prevent it.
Note that I'm not asking for perfection. However if someone does manage to create child porn (or any of a number of currently unspecified things - the list is likely to grow over the next few years), you need to show that you have a lot of protections in place and they did something hard to bypass them.
>it's difficult to draw statistical comparisons [...] because there's so little data
That ain't true [1].
> it's kind of wild that a couple of those crashes would've been easily preventable with parking sensors that come equipped as standard on almost every other car
Teslas are really cheaply made, inadequate cars by modern standards. The interiors are terrible and are barebones even compared to mainstream cars like a Toyota Corolla. And they lack parking sensors depending on the version you bought. I believe current models don’t come with a surround view camera either, which is almost standard on all cars at this point, and very useful in practice. I guess I am not surprised the Robotaxis are also barebones.
It's not ever going to be ready.
Getting this to a place where it is continuously better than humans is not equivalent to fixing bugs in the kind of software that ships on phones, etc.
When you are dealing with a dynamic, uncontained environment it is much more difficult.
Waymo is in a place where it's better than humans continuously. If Tesla is not, that's on them, either because their engineers are not as good or because they're forced to follow Elon's camera-only mandate.
If you ride enough Waymo, you will realize it is a far more cautious driver than a human but not a better driver. If you need to get somewhere even at an average speed, you still take uber/lyft.
Waymo still takes many wrong turns and can easily get stuck in situations where a human would not.
citation needed. Waymo says they are better, but it is really hard to find someone without a conflict of interest who we can believe has and understands the data.
I reject the premise of your comment. If Tesla wants to convince people that Robotaxi is safe, it's on them to publish an analysis with comparative data and stop redacting the crash details that Waymo freely provides. Until they do, it's reasonable to follow the source article's simple math and unreasonable to declare that there's no way to be sure because there might be some unknown factor it's not accounting for.
That is a first step, but anything published by Tesla or someone they pay is suspect, so independent analysis is mandatory. That includes some data gathering to ensure the data Tesla/Waymo gives is correct.
It's the camera-only mandate, and it's not Elon's but Karpathy's.
Any engineering student can understand why LIDAR+Radar+RGB is better than cameras alone; and any person moderately aware of tech can realize that digital cameras are nowhere near as good as the human eye.
But yeah, he's a genius or something.
Digital cameras are much worse than the human eye, especially when it comes to dynamic range, but I don't think that's all that widely known actually. There are also better and worse digital cameras, and the ones on a Waymo are very good, and the ones on a Tesla aren't that great, and that makes a huge difference.
Beyond even the cameras themselves, humans can move their head around, use sun visors, put on sunglasses, etc to deal with driving into the sun, but AVs don't have these capabilities yet.
> especially when it comes to dynamic range
You can solve this by having multiple cameras for each vantage point, with different sensors and lenses that are optimized for different light levels. Tesla isn't doing this mind you, but with the use of multiple cameras, it should be easy enough to exceed the dynamic range of the human eye so long as you are auto-selecting whichever camera is getting you the correct exposure at any given point.
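A minimal sketch of that auto-selection idea (a toy strategy with made-up data; not any vendor's actual pipeline):

```python
import numpy as np

def pick_best_exposed(frames, low=0.02, high=0.98):
    """Return the frame (one per differently-exposed camera) with the fewest
    clipped pixels. frames are float arrays normalized to [0, 1]."""
    def clipped_fraction(img):
        return np.mean((img <= low) | (img >= high))
    return min(frames, key=clipped_fraction)

# Fake data: a blown-out frame from a bright-tuned camera vs. a well-exposed one.
blown_out = np.clip(np.random.normal(0.97, 0.05, (480, 640)), 0, 1)
well_exposed = np.clip(np.random.normal(0.50, 0.15, (480, 640)), 0, 1)
best = pick_best_exposed([blown_out, well_exposed])  # picks the well-exposed frame
```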
Tesla claims that their cameras use "photon counting" and that this lets them see well in the dark, in fog, in heavy rain, and when facing bright lights like the sun.
Photon counting is a real thing [1] but that's not what Tesla claims to be doing.
I cannot tell if what they are doing is something actually effective that they should have called something other than "photon counting" or just the usual Musk exaggerations. Anyone here familiar with the relevant fields who can say which it is?
Here's what they claim, as summarized by whatever it is Google uses for their "AI Overview".
> Tesla photon counting is an advanced, raw-data approach to camera imaging for Autopilot and Full Self-Driving (FSD), where sensors detect and count individual light particles (photons) rather than processing aggregate image intensity. By removing traditional image processing filters and directly passing raw pixel data to neural networks, Tesla improves dynamic range, enabling better vision in low light and high-contrast scenarios.
It says these are the key aspects:
> Direct Data Processing: Instead of relying on image signal processors (ISPs) to create a human-friendly picture, Tesla feeds raw sensor data directly into the neural network, allowing the system to detect subtle light variations and near-IR (infrared) light.
> Improved Dynamic Range: This approach allows the system to see in the dark exceptionally well by not losing information to standard image compression or exposure adjustments.
> Increased Sensitivity: By operating at the single-photon level, the system achieves a higher signal-to-noise ratio, effectively "seeing in the dark".
> Elimination of Exposure Limitations: The technique helps mitigate issues like sun glare, allowing for better visibility in extreme lighting conditions.
> Neural Network Training: The raw, unfiltered data is used to train Tesla's neural networks, allowing for more robust, high-fidelity perception in complex, real-world driving environments.
all the sensor has to do is keep count of how many times a pixel got hit by a photon in the span of e.g. 1/24th of a second (long exposure) and 1/10000th of a second (short exposure). Those two values per pixel yield an incredible dynamic range and can be fed straight into the neural net.
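Rough arithmetic on what those two exposure times buy you (the per-exposure figure below is an assumed round number, not a spec):

```python
import math

# Dynamic-range headroom from bracketing the two exposures quoted above.
long_exp, short_exp = 1 / 24, 1 / 10_000
exposure_ratio = long_exp / short_exp              # ~417x more light gathered
extra_db = 20 * math.log10(exposure_ratio)         # ~52 dB of added headroom
native_db = 70                                     # assumed single-exposure sensor range

print(f"Bracketing adds ~{extra_db:.0f} dB on top of ~{native_db} dB native,")
print(f"for roughly {native_db + extra_db:.0f} dB combined")
```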
https://www.sony-semicon.com/files/62/pdf/p-15_IMX490.pdf
The IMX490 has a dynamic range of 140dB when spitting out actual images. The neural net could easily be trained on multiexposure to account for both extremely low and extremely high light. They are not trying to create SDR images.
Please let's stop with the dynamic range bullshit. Point your phone at the sun when you're blinded in your car next time. Or use night mode. Both see better than you.
I have enjoyed Karpathy's educational materials over the years, but somehow missed that he was involved with Tesla to this degree. This was a very insightful comment from 9 years ago on the topic:
> What this really reflects is that Tesla has painted itself into a corner. They've shipped vehicles with a weak sensor suite that's claimed to be sufficient to support self-driving, leaving the software for later. Tesla, unlike everybody else who's serious, doesn't have a LIDAR.
> Now, it's "later", their software demos are about where Google was in 2010, and Tesla has a big problem. This is a really hard problem to do with cameras alone. Deep learning is useful, but it's not magic, and it's not strong AI. No wonder their head of automatic driving quit. Karpathy may bail in a few months, once he realizes he's joined a death march.
> ...
https://news.ycombinator.com/item?id=14600924
Karpathy left in 2022. Turns out that the commenter, Animats, is John Nagle!
Using only cameras is a business decision, not tech decision: will camera + NN be good enough before LIDAR+Radar+RGB+NN can scale up.
For me it looks like they will reach parity at about the same time, so camera only is not totally stupid. What's stupid is forcing robotaxi on the road before the technology is ready.
Clearly they have not reached parity, as evidenced by the crash rate of Tesla.
It's far from clear that the current HW4 + sensor suite will ever be sufficient for L4.
>reach parity at about the same time
Nah, Waymo is much safer than Tesla today, while Tesla has way-mo* data to train on and much more compute capacity in their hands. They're in a dead end.
Camera-only was a massive mistake. They'll never admit to that because there's now millions of cars out there that will be perceived as defective if they do. This is the decision that will sink Tesla to the ground, you'll see. But hail Karpathy, yeah.
* Sorry, I couldn't resist.
Was Karpathy "fired" from Tesla because he could not make camera-only work?
Or did he "resign" because Elon insists on camera-only and Karpathy said "I can't do it"?
It's clear that camera-only driving is getting better as we have better image understanding models every year. So there will be a point when camera based systems without lidars will get better than human drivers.
Technology is just not there yet, and Elon is impatient.
Then stop deploying camera only systems until that time comes.
Waymo could be working on camera only. I don’t know. But it’s not controlling the car. And until such a time they can prove with their data that it is just as safe, that seems like a very smart decision.
Tesla is not taking such a cautious approach. And they’re doing it on public roads. That’s the problem.
Lidar and radar will also get better and having all possible sensors will always out perform camera only.
> So there will be a point when camera based systems without lidars will get better than human drivers.
No reason to assume that. A toddler that is increasing in walk speed every month will never be able to outrun a cheetah.
in contrast, a toddler equipped with an ion thruster & a modest quantity of xenon propellant could achieve enough delta-v to attain cheetah-escape velocity, provided the initial trajectory during the first 31 hours of the mission was through a low-cheetah-density environment
That initial trajectory also needs to go through a low air density environment. At normal air density near the surface of the Earth that ion thruster could only get a toddler up to ~10 km/h before the drag force from the air equals the thrust from the ion thruster.
The only way that ion thruster might save the toddler is if it was used to blast the cheetah in the face. It would take a pretty long time to actually cause enough damage to force the cheetah to stop, but it might be annoying enough and/or unusual enough to get it to decide to leave.
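For anyone who wants to check that ~10 km/h figure, here's a quick back-of-envelope sketch. Every number in it is an assumption picked for illustration (a generous ~1 N thruster, well above typical flight units; a toddler-sized frontal area; sea-level air density), not a measured value:

```
import math

THRUST_N = 1.0          # assumed, optimistic ion thruster
FRONTAL_AREA_M2 = 0.2   # assumed toddler frontal area
DRAG_COEFF = 1.0        # assumed bluff-body drag coefficient
AIR_DENSITY = 1.225     # kg/m^3 at sea level

# Top speed is where drag (0.5 * rho * Cd * A * v^2) equals thrust.
v_max = math.sqrt(2 * THRUST_N / (AIR_DENSITY * DRAG_COEFF * FRONTAL_AREA_M2))
print(f"{v_max:.1f} m/s ~= {v_max * 3.6:.0f} km/h")  # ~2.9 m/s, ~10 km/h
```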
> low air density environment. At normal air density near the surface of the Earth that ion thruster could only get a toddler up to ~10 km/h
agreed. this also provides an explanation for the otherwise surprising fact that prey animals in the savannah have never been observed to naturally evolve ion thrusters.
I'm not an Elon fan at all, and I'm highly skeptical of Tesla's robotaxi efforts in general, but the context here is that only one of these seems like a true crash?
I'm curious how crashes are reported for humans, because it sounds like 3 of the 5 examples listed happened at like 1-4 mph, and the fourth probably wasn't Tesla's fault (it was stationary at the time). The most damning one was a collision with a fixed object at a whopping 17 mph.
Tesla sucks, but this feels like clickbait.
To be fair, the article calls that out specifically at the end:
> What makes this especially frustrating is the lack of transparency. Every other ADS company in the NHTSA database, Waymo, Zoox, Aurora, Nuro, provides detailed narratives explaining what happened in each crash. Tesla redacts everything. We cannot independently assess whether Tesla’s system was at fault, whether the safety monitor failed to intervene in time, or *whether these were unavoidable situations caused by other road users*. Tesla wants us to trust its safety record while making it impossible to verify.
Agreed. The "Tesla backed into objects, one into a pole or tree at 1 mph and another into a fixed object at 2 mph" stood out to me in particular. There is no way that any human driver is going to report backing into something at 1 or 2 mph.
While I was living in NYC I saw collisions of that nature all the time. People put a "bumper buddy" on their car because the street parallel parking is so tight and folks "bump" the car behind them while trying to get out.
My guess is that at least 3 of those "collisions" are things that would never be reported with a human driver.
Low mph does not automatically mean a crash isn't serious. It says nothing about the speed of the other vehicles. Tesla could be creeping at 2mph into the flow of traffic, or it could come to a complete stop after doing that and still be the cause of an accident.
This is with safety drivers. So at this point you can't really make any conclusions about how good the Robotaxi is at avoiding major crashes since those should ideally be handled by the safety drivers. Without the actual data around all driver interventions you cannot make any positive conclusions about safety here.
My suspicion is that these kinds of minor crashes are simply harder to catch for safety drivers, or maybe the safety drivers did intervene here and slow down the car before the crashes. I don't know if that would show in this data.
If you routinely hit other objects, even at 1-4 mph, you are not a good driver.
The average driver also likely hits objects at 1-4 mph at more than 4x the rate they hit things at a severity high enough to generate a police report.
So the average driver is also likely a bad driver by your standard. Your standard seems reasonable.
The data is inconclusive on whether Tesla robotaxi is worse than the average driver.
Unlike humans, Waymo does report 1-4 mph collisions. The data is very conclusive that Robotaxi is significantly worse than Waymo.
I'd be interested in more details about the 17mph collision as well. Was it a dead-center collision with a pole after hard braking? Was it a mirror clip or a curb clip or something similar? There seem to be a wide range of possibilities.
Doesn't matter if you're doing 4mph moving into an intersection where cross traffic is doing 35 or more.
Interesting crash list. A bunch of low speed crashes, one bus hit the Tesla while the Tesla was stationary, and one 17mph into static object (ouch).
For those complaining about Tesla's redactions - fair and good. That said, Tesla formed its media strategy at a time when gas car companies and shorts bought ENTIRE MEDIA ORGs just to trash them to back their short. Their hopefulness about a good showing on the media side died with Clarkson and co faking dead batteries in a roadster test -- so, yes, they're paranoid, but also, they spent years with everyone out to get them.
Which media org was bought for this?
Are you being sarcastic due to Elon buying Twitter to own/control the conversation? He would be a poster child for the bad actions you are describing.
What media company did Ford buy? What about Honda? Or Toyota? On the flip side, I can think of a very specific media site the Elon purchased.
It does not reflect well on Tesla to have failed to update their media structure now that EVs are everywhere and no longer a threat to existing car companies.
EVs are an even bigger threat now if you're outside the regulated bubble in the US. Everywhere else, China dominates the market with cheaper and cheaper EVs, while EU/US automakers fail to compete. Replace Tesla with China.
EVs aren't a threat because every automaker now has an EV program and has for years. It's now carmaker vs carmaker, not kind of car vs kind of car.
Literally all the US carmakers are cancelling their EVs.
There’s also one where Tesla hit a parked truck:
“13781-13644 Street, Heavy truck, No injuries, Proceeding Straight (Heavy truck: parked), 4mph, contact area: left”
Do you have documentation of these moves by shorts? I was there day one at /r/realtesla and I know the events that led to the formation of that sub. A lot of what you describe wasn't part of the lore so im curious to fill in the blanks looking back.
Also, as a disclaimer, I need to know if you were long the stock at the time. Too much distortion caused by both shorts and longs. I wasn't on either side, but I learned after many hard years that so much on /r/teslamotors and /r/realtesla was just pure nonsense.
Sadly I was not long the stock then. I remember clearly a very bad investment decision day - I got a model 3, loved it, sold my Volvo XC60 Inscription, heretofore my favorite daily driver - and did not buy TSLA stock with the proceeds. Expensive mistake. I don't have documentation, I was just an interested bystander.
It's just HN getting baited for the 100th time by an Electrek article.
It's funny how one can see a persecuted underdog in a company that claimed full self driving (coast to coast) almost a decade ago and had not delivered anything close until just last year. I wonder how the folks who bought their "appreciating asset"[1] in 2019 feel about their cars' current value.
[1] https://www.businessinsider.com/musks-claim-teslas-appreciat...
Yeah, you can get a used Tesla for a bag of chips where I am ... and I still wouldn't buy one.
I just got one after the 14.2 update. Best car I've owned, I run >90% self driving. Is it ready for totally autonomous driving? No. It gets confused. They'll get there soon enough.
Not with the non-self-cleaning sensor suite they have right now.
If new, you just funded a narcissistic wanker and his ding-a-ling tribe. Just saying.
If used, good on you. You're not making things much worse. I've seen people cheap out and buy performance diesels as they'd depreciated so much. Picking up a cheapo Tesla is at least better than that sorry outcome. Thanks.
Electrek, as always.
> The incidents included a collision with a fixed object at 17 miles per hour, a crash with a bus while the Tesla vehicle was stopped, a crash with a truck at four miles per hour, and two cases where Tesla vehicles backed into fixed objects at low speeds.
So in reality there's one crash with a fixed object; the rest are questionable and not crashes as portrayed. Incidents like that won't even make it into human crash reports, since they get filed as non-driving incidents, parking lot bumps, etc.
For everyone's context, over the same period Waymo had 101 collisions according to the same dataset.
What dataset? Doesn't the article clearly specify a different number?
Your context sucks; it's as good as a lie.
>Waymo reports 51 incidents in Austin alone in this same NHTSA database, but its fleet has driven orders of magnitude more miles in the city than Tesla’s supervised “
You are talking about 5 incidents; this is not statistics. It's just a fluctuation of random numbers and random events, like a bus hitting the taxi while it's idle. That's already 20% of your data being incorrect, lol, since it's 1 out of 5.
So far, you can clearly tell: 1. Tesla works decently in a limited environment, no crazy patterns. 2. It's a limited environment, which means nothing. Scale is still not there. They need to prove themselves.
I'm no tesla lover, but I doubt that 4mph backing into an object is something that would be reported in a human context so I'm not sure a '4x' number is really comparative vs. sensationalized.
I'm almost certain they aren't comparing crash statistics to the equivalent human taxi context, "professional" drivers.
Electrek is a highly biased source, the editor has a grudge against Elon and Tesla. It's really unfortunate since it used to be one of the best EV sites.
Are the facts presented in the article incorrect?
One of them is a bus hitting a stationary Tesla - hard to paint that as the Tesla's fault.
A few are low-speed reversing into things, the vast majority of which, when done by humans, are never reported and are not in the dataset used to compare how many crashes Teslas have had vs humans.
I would say they're facts, but they're being used dishonestly.
> One of them is a bus hitting a stationary Tesla - hard to paint that as the Tesla's fault.
Since the narratives are redacted, who's to say the Tesla didn't change lanes to be in front of the bus, slam on the brakes, then get rear ended?
Or pull partially out of a driveway, stopping and blocking a lane with a bus traveling 35mph in said lane and got hit by it?
> A few are low-speed reversing into things, the vast majority of which, when done by humans, are never reported and are not in the dataset used to compare how many crashes Teslas have had vs humans.
I'm sure this happens to humans all the time, but not a single one of those humans would be considered a good (or even decent) driver.
> not a single one of those humans would be considered a good (or even decent) driver.
So is the bar here being a good or decent driver, or being x times worse than the average human?
I see a lot of bar moving.
> So is the bar here being a good or decent driver, or being x times worse than the average human?
> I see a lot of bar moving.
"Less than decent" means "worse than the average human driver".
I've never hit a stationary object, or any object for that matter, in 20 years of driving.
I understand that might not be the same for you. My bar is that it must be better than my own good driving.
That is a completely made up bar that is impossible to test for, and can never be met.
Even Waymo have tons of reported crashes in the same document.
Self-driving cars need to be better than the average human - which means fewer injuries and deaths. Given that 100 people will be killed on US roads today, it's actually not a crazy high bar to clear.
You could say the same in reverse about HN.
In reverse? What do you mean exactly?
As someone who is neither an Elon fan nor a hater, it irks me how deranged HN is about anything Musk-related.
Is there evidence of that? No matter who criticizes Musk's companies, they get slandered in one way or another, which doesn't mean Electrek isn't biased.
Still the best EV site. The Elon cult is every bit as bad as MAGA.
It's not the best EV site when all of the coverage of the US market leader is so incredibly slanted due to personal politics of the editor.
Tesla is not the market leader in BEVs. It's BYD (2.25m 2025), then Tesla (1.6m), then VW AG (1 million). Given the current growth/shrinkage rates (33% growth for VW AG, 9% drop for Tesla), they'll be third in a year or so.
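As a rough sanity check on "third in a year or so": if you treat those 2025 volumes and growth rates as fixed going forward (they won't be, so this is only a toy projection), the crossover lands at about 1.2 years:

```
import math

tesla_2025, tesla_growth = 1.6e6, -0.09  # volumes and growth rates quoted above
vw_2025, vw_growth = 1.0e6, 0.33

# Years until VW AG overtakes Tesla: solve vw*(1+g_vw)^t = tesla*(1+g_tesla)^t.
t = math.log(tesla_2025 / vw_2025) / math.log((1 + vw_growth) / (1 + tesla_growth))
print(f"VW AG passes Tesla in ~{t:.1f} years at these rates")  # ~1.2
```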
Of course, if you are accustomed to the Elon and Tesla glazing of 2002-2024, critique and scrutiny feel like oppression.
It's always like that. The poor billionaire, soon to be trillionaire, is getting bullied by the blogger. Not.
Do you even realize how dumb that sounds?
It's impressive how bad they are at hiring safety drivers. This is not even measuring how good the Robotaxi itself is; right now it's only measuring how good Tesla is at running this kind of test. This is not inspiring any confidence.
Though maybe the safety drivers are good enough for the major stuff, and the software is just bad enough at low speed and low distance collisions where the drivers don't notice as easily that the car is doing something wrong before it happens.
Did anyone actually read the article before commenting? The crashes were all minor. No injuries. If anything this shows Tesla making an effort to report everything. A 2mph bump isn't a "crash"; it's barely anything. The 17mph collision may have caused some minor damage to the "fixed object", but that's not clear from the article.
> Did anyone actually read the article ... If anything this shows Tesla making an effort to report everything.
Meanwhile, the article, if you read it:
> What makes this especially frustrating is the lack of transparency. Every other ADS company in the NHTSA database, Waymo, Zoox, Aurora, Nuro, provides detailed narratives explaining what happened in each crash. Tesla redacts everything.
A 2mph bump isn't nothing. If the car fails to stop, it can keep pushing into people, and even at that speed it can injure the elderly or disabled. The 17mph collision may have caused only minor damage to the "fixed object", but that fixed object could've been someone standing still. Tesla is not making an effort; they're doing the bare minimum.
And they're making them without steering wheels now! I'm sure there's a saying about that.
https://electrek.co/2026/02/17/tesla-rolls-first-steering-wh...
So, are we at 50% of the US population, without the safety drivers, before the end of 2025 yet?
The sheer scale and financial consequences (once it implodes) of the Tesla scam are unprecedented.
While the word 'crash' seems a bit strong here, keep in mind, as the article mentions, that Tesla is voluntarily redacting all the details. The company was recently found to have deliberately lied about the availability of data in a fatal-crash case, and was ordered to pay over $200 million.
Funny to see the comments here vs the thread the other day where a Waymo hit a child.
There's no real discussion to be had on any of this. Just people coming in to confirm their biases.
As for me, I'm happy to make and take bets on Tesla beating Waymo. I've heard all these arguments a million times. Bet some money
> Tesla beating Waymo
Heard this for a decade now, but I’m sure this year will be different!
I didn't say this year, but lets bet on it?
Nothing says confidence like a prediction with an unspecified timeline.
Propose a bet with concrete details and resolutions so we can bet.
For instance, would you like to bet 1000 dollars that Tesla has more unsupervised self-driving robotaxis than Waymo at the start of 2027?
Sure. I'll bet you that in calendar year 2028 Waymo has more paid passenger trips than Tesla. Loser donates $1k USD to Doctors Without Borders.
I'm not a camera-only doomer, and I expect that in ten years Waymo will also not use lidar, or that the units will be incredibly cheap and well integrated.
But I think the pro-Tesla camp is exaggerating how quickly the march of 9s will happen for them, and underestimating how quickly Waymo will expand in the next few years.
I am in on this as well and will do 10-1: I will donate $10k for any $1k pledged (I would easily do 100-1, as I am sure Tesla will kill robotaxi within the next 2 years "to focus on alien robots on Moon Data Centers").
I own a Tesla (and subscribe to "FSD", >70% of my miles are FSD without issue). As it stands though, Waymo is by every metric objectively better at "autonomous driving".
I would also love to see every car brand have full autonomous driving. It seems like you think you must be in one camp or another, and that one has to "beat" the other - but that's not true. Both can be successful - wouldn't that be a great world?
They are not comparable. The Waymo incident involved a child who ran out from behind an SUV and into the roadway, directly in front of the Waymo [1].
[1] https://www.fastcompany.com/91491273/waymo-vehicle-hit-a-chi....
Is there any place online to read the incident reports? For example, for Waymo in CA there's a government page where you can read them; I read 9 of them and none were Waymo's fault, so I'm wondering how many of these crashes are similar (i.e., stopped at a red light and someone rear-ends them).
No, TSLA purposely does not list the details of the incident.
Elon discarded any thoughts of LiDAR years ago. Motor Trend said the lack of LiDAR and reliance on cameras has led to specific, recurring issues that critics argue make it less "practical" for true, unsupervised autonomy.
The source is a well-known anti-Tesla, anti-Musk site; the owner has a psychotic hatred of Tesla and Elon after running a balanced clickbait site for years. Ignore.
The source is legally mandated reporting to the government.
Electrek is just summarizing/commenting.
Electrek is not commenting neutrally; it always gives any data an anti-Tesla spin.
I didn’t say they were. They do have a bias. I have the same one.
My comment was aimed at the implication that the data might be untrustworthy because they were the ones reporting it.
So I pointed out it wasn’t their data.
As for “spin“ Elon has been telling us for a long time that FSD is safer than humans and will save lives. We appear to have objective data that counters that narrative.
That seems worth reporting on to me.
Who said the data was untrustworthy? The source of the article presents the data in a highly negative light, which it does in 99% of its articles, so it's a worthless website for reporting data of this sort.
It's basically a few light bumps at a snail's pace, probably caused by other cars. The article's headline reads as if it mowed down a group of school children.
Every criticism is accompanied by comments attacking the messenger. Is there evidence of Elektrek's supposed bias?
Also keep in mind all of the training and data and advanced image processing has only ever been trained on cities with basically perfect weather conditions for driving (maybe with the exception of fog in San Francisco).
We are still a long, long, long way off from anyone feeling comfortable jumping into an FSD cab on a rainy night in New York.
I'll stick to the bus.
One of the Robotaxi “crashes” was actually a moving bus colliding into a stationary Robotaxi.
That's even more convincing. I wouldn't want to be in the RoboTaxi that's getting hit by a bus
He's going to fix this by having Grok redefine "widespread"
https://www.cnbc.com/2026/01/22/musk-tesla-robotaxis-us-expa...
Tesla CEO Elon Musk said at the World Economic Forum in Davos that the company’s robotaxis will be “widespread” in the U.S. by the end of 2026.
Their service is way worse than you think, in every way. The actual unsupervised Robotaxi service doesn't cover a geofenced area of Austin, like Waymo does. It traverses a fixed route along South Congress Avenue, like a damned bus.
Such slop. First, they take NHTSA SGO "crashes" which explicitly includes basically any physical impact with property damage e.g. 1–2 mph “backed into a pole/tree”.
Then they compare that numerator to Tesla’s own “minor collision” benchmark — which is not police-reported fender benders; it’s a telemetry-triggered “collision event” keyed to airbag deployment or delta-V ≥ 8 km/h. Different definitions. Completely bogus ratio.
Any comparison to police-reported crashes is hilariously stupid for obvious reasons.
On top of that, the denominator is hand-waved ("~800k paid miles extrapolated"), which is extra sketchy because SGO crashes can happen during non-paid repositioning/parking while "paid miles" excludes those segments. And we’re talking 14 events in one geofenced, early rollout in Austin so your confidence interval is doing backflips. If you want a real claim vs humans, do matched Austin exposure, same reportable-crash criteria, severity stratification, and show uncertainty bands.
But what you get instead is clickbait so stop falling for this shit please HN.
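To put a rough number on how wide those bands would be, here's a minimal sketch using the article's approximate figures as assumptions (about 14 SGO-reportable events over a hand-waved ~800k miles) and an exact Poisson interval on the count:

```
from scipy.stats import chi2

events = 14        # assumed: reportable contacts in Austin so far
miles = 800_000    # assumed: the extrapolated exposure from the article

# Exact (Garwood) 95% confidence interval on a Poisson count.
alpha = 0.05
lo = chi2.ppf(alpha / 2, 2 * events) / 2
hi = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2

print(f"point estimate: 1 event per {miles / events:,.0f} miles")
print(f"95% interval:   1 per {miles / hi:,.0f} to 1 per {miles / lo:,.0f} miles")
```

With these inputs the miles-per-event figure could plausibly sit anywhere from roughly 34,000 to 105,000, a ~3x spread before you even start arguing about definitions or the denominator, which is exactly why a headline "4x worse" ratio deserves skepticism.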
This is something Electrek does regularly and isn't unique to this article, but I don't like how they suggest Tesla's crash reports are doing something shady by following the reporting guidelines. Tesla is reporting things by the book, and when Electrek doesn't like how the laws are laid out, they blame Tesla. Electrek wants Tesla to publish separate press notes, and since Tesla doesn't, Electrek takes its frustration out on Tesla's integrity in the article, which is worse for everyone.
According to the OP, all other autonomous driving companies publish complete accident reports.
This is (one of) my point(s): Tesla does publish accident reports to all the major government agencies, and then those agencies make them public. Electrek wants a press packet, which Tesla isn't doing for them. In response, they try to make it sound like Tesla is shady and hiding things, or otherwise acting in some nefarious manner nobody else could think to act in. It feels disingenuous and will only serve to hurt autonomous driving optics for all companies.
> Tesla does publish accident reports to all the major government agencies, and then those agencies make them public.
Electrek says they aren't made public, if I understand correctly (?). Do you know where the public can access them - do you have any links?
A minor fender-bender is not a crash
4x worse than humans is misleading; I bet it's better than humans, by a good margin.
I agree, and not in defense of Tesla but a 1mph collision while backing is something most human drivers are not going to report anywhere. That's why most cars have little scrapes and scratches on the bumpers and doors. Tesla should be more forthcoming with the full narrative of these incidents though.
Now imagine if all those billions in taxes had been used to build real transit infrastructure instead of subsidizing Tesla.
Your deportation papers for being a communist agitator are on the way.
Well, how about time to take them off the roads then?
TSLA investors don't care (as long as Musk is still there to keep them believing). Years of bad news, and the stock is only 10% off its all-time highs.
Not too bad. It'll only improve from here, and some of the accidents are reversing into poles and whatnot, most of which wouldn't be counted in human accident statistics.
Waymo is licensing out their "Driver" software to cars that fit the specification.
If Tesla drops the ego, they could get Waymo's software and track record on future Tesla hardware.
It's a fusion of jazz and funk!
Given how minor these are, you'd think they'd get in front of the conspiracy theories with full disclosure.
I spew elon hate every chance I get and I maintain I am being too kind on him.
Wait, I thought there were none anywhere? That the promised taxis never arrived?
This data seems very incomplete and potentially misleading.
>The new crashes include [...] a crash with a bus while the Tesla was stationary
Doesn't this imply that the bus driver hit the stationary Tesla, which would make the human bus driver at fault and the party responsible for causing the accident? Why should a human driver hitting a Tesla be counted against Tesla's safety record?
It's possible that the Tesla could've been stopped in a place where it shouldn't have, like in the middle of an intersection (like all the Waymos did during the SF power outage), but there aren't details being shared about each of these incidents by Electrek.
>The new crashes include [...] a collision with a heavy truck at 4 mph
The chart shows only that the Tesla was driving straight at 4mph when this happened, not whether the Tesla hit the truck or the truck hit the Tesla.
Again, it's entirely possible that the Tesla hit the truck, but why aren't these details being shared? This seems like important data to consider when evaluating the safety of autonomous systems - whether the autonomous system or human error was to blame for the accident.
I appreciate that Electrek at least gives a mention of this dynamic:
>Tesla fans and shareholders hold on to the thought that the company’s robotaxis are not responsible for some of these crashes, which is true, even though that’s much harder to determine with Tesla redacting the crash narrative on all crashes, but the problem is that even Tesla’s own benchmark shows humans have fewer crashes.
Aren't these crash details / "crash narrative" a matter of public record and investigations? By e.g. either NHTSA, or by local law enforcement? If not, shouldn't it be? Why should we, as a society, rely on the automaker as the sole source of information about what caused accidents with experimental new driverless vehicles? That seems like a poor public policy choice.
> there aren't details being shared about each of these incidents by Electrek.
> Aren't these crash details / "crash narrative" a matter of public record and investigations?
Per the OP, Tesla doesn't publish the details; all other autonomous driving manufacturers do.
"Tesla remains the only ADS operator to systematically hide crash details from the public through NHTSA’s confidentiality provisions."
Given the way Musk has lied and lied about Tesla's autonomous driving capabilities, that can't be much of a surprise to anyone.
At this point, a much bigger news story might be "robotaxi made it from point A to point B in a straight line in a vacant parking lot (supervised)".
Honestly I thought everyone was clear how this was going to go after the initial decapitation from 2016, but it seems like everyone's gonna allow these science experiments to keep causing damage until someone actually regulates them with teeth.
Just imagine how bad it is going to be when they take the human driver out of the car.
No idea how these things are being allowed on the road. Oh wait, yes I do. $$$$
You are absolutely correct. We need context and nuance but there simply is none in this data. It is clickbait. "Crashing" in this report is defined as "an object touched the car". The highest speed is 27MPH with "an animal" (turtle, cat, pigeon, we have no idea), while the lowest speed crashes are at -0- MPH. Yes, they're including five events where something hit the car while it was not moving.
It's amazing how well Waymo functions as a shibboleth for people who don't understand how these systems work. Waymo's remote assistants provide high level guidance, they're not part of the control loop.
Supposedly neither are Tesla's remote assistants, though there are open questions about why they've posted job descriptions about building a teleop system for their vehicles [0] and why their remote assistant setups have steering wheels if that's completely true.
[0] https://web.archive.org/web/20241211115851/https://www.tesla...
At this point, I am really sick of both Elon supporters and Elon haters, coverage of Elon's companies either good or bad (as it's always incredibly biased in either direction), and sick of both the current trend of hyper optimism and hyper doomerism.
I know that it is irrational to expect any kind of balance or any kind of objective analysis, but things are so polarized that I often feel the world is going insane.
Good, who cares. Autonomous driving is an absolute waste of time. We need autodrone transport for civilian traffic. The skies have been waiting.
In before, 'but it is a regulation nightmare...'
It is safety, regulatory and noise nightmare.
As opposed to what, nuclear energy, airplane traffic, people directing 2 ton vehicles?
Get over your bs.