No wonder Musk hates Altman…

Tesla has been developing self-driving cars for a long time. The first Model Ss with "Autopilot" rolled out of the factory a decade ago. Their more advanced FSD first reached customers over five years ago. Yet, even after all this time and billions of dollars spent, these systems still suck. Third-party data shows that even the latest versions of FSD can only travel 493 miles between critical disengagements. However, the actual figure is likely far worse, as the data also shows that FSD customers distrust the system so much that they only use it 15% of the time! Tesla's soft launch of its Robotaxi service demonstrated this woeful lack of safety, as within a few days, the vehicles had been spotted egregiously violating traffic laws and driving dangerously multiple times. It feels like Tesla is going nowhere and is just smacking its head against a brick wall. Surely, FSD will work as promised eventually, right? Well, not according to a research paper from OpenAI…

OpenAI does more than fail to replace your job, destroy the internet with its brainrot slop and force our financial institutions into an economy-crushing bubble. Behind all the bullshit hype, they have a dedicated team of top-notch AI scientists doing brilliant research. Interestingly, their latest paper is the pin that could pop the AI bubble.

These scientists were trying to find a way to stop AI "hallucinating". I hate that term. It anthropomorphises a dead machine by rebranding its errors, which reinforces the mass pareidolia psychosis that makes us all believe this box of probability is even remotely intelligent. For example, METR has found that AI programming tools actually slow down developers, as they make recurrent and strange errors (hallucinations), which means developers have to spend so much time debugging that it would have been faster to just write the code themselves. If AI companies can't get rid of these kinds of errors, their tools are useless, and their entire business is worthless.

This makes the findings of this paper utterly damning, as they demonstrate that hallucinations are a core element of generative AI technology and can't be fixed, or even reduced from their current levels, simply by adding more data and computing power to these models (which is the current strategy of OpenAI and the entire AI industry). This really isn't that surprising. Generative AI is just a probability engine; it isn't a thinking thing. As such, it will always have some probability of making a mistake. This is why these scientists also found that "reasoning models", which break a prompt down into multiple intermediate steps in an attempt to produce more accurate results, actually make hallucinations worse! Every extra step is another opportunity for these errors to cock things up.
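To see why chaining steps makes things worse rather than better, here is a toy calculation (my own illustration, not from the paper; the 2% figure is an assumption, not a measured error rate). If each step in a chain has even a small independent chance of going wrong, the odds of the whole chain being error-free shrink with every step:

```python
# Toy illustration: per-step error rates compound across a chain of steps.
# The 2% per-step error rate is an assumption for illustration only.

def chain_success_probability(per_step_error: float, steps: int) -> float:
    """Probability that every step in the chain is error-free,
    assuming steps fail independently."""
    return (1 - per_step_error) ** steps

per_step_error = 0.02  # assume each step is 98% reliable

for steps in (1, 5, 10, 20):
    p = chain_success_probability(per_step_error, steps)
    print(f"{steps:2d} steps -> {p:.1%} chance of a fully correct chain")
```

At 98% per-step reliability, a 20-step chain is fully correct only about two-thirds of the time. More steps means more rolls of the dice, not more accuracy.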

Those who have been paying attention to the AI world have known this for a while now. We have known about the efficient compute frontier, which explains how AI experiences seriously diminishing returns, for years (read more here).

Okay, so what has this got to do with Tesla's FSD?

Well, you might not realise it, but FSD is in fact composed of two generative AIs. It takes input from camera feeds (and only camera feeds) and uses AI computer vision to generate a model of the area around the car, which then serves as the input for a second, self-driving AI that generates control inputs for the car.
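The problem with that two-stage structure is that any mistake in the first model feeds straight into the second. A toy simulation (entirely my own illustration, with made-up error rates; Tesla's real stack is proprietary) shows how the stages' error rates combine when nothing sits between them to catch mistakes:

```python
# Toy simulation of a two-stage generative pipeline with no checks between
# stages. The 3% error rates are illustrative assumptions, nothing more.
import random

random.seed(42)

VISION_ERROR = 0.03   # assumed chance stage 1 misreads the scene
DRIVING_ERROR = 0.03  # assumed chance stage 2 mishandles a correct scene

def drive_once() -> bool:
    """One simulated decision: True if the car acted correctly."""
    scene_correct = random.random() > VISION_ERROR
    # Stage 2 only has stage 1's output to work from; a wrong scene
    # model means a wrong decision, with no sensor to contradict it.
    if not scene_correct:
        return False
    return random.random() > DRIVING_ERROR

trials = 100_000
correct = sum(drive_once() for _ in range(trials))
print(f"Correct decisions: {correct / trials:.1%}")
# Analytically, roughly (1 - 0.03) * (1 - 0.03), i.e. about 94%:
# the pipeline is less reliable than either stage on its own.
```

Stacking two imperfect generative models multiplies their reliabilities together, so the combined system is always worse than its weakest stage.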

As a side note, FSD proves my point about AI "hallucination" being a terrible PR phrase. When FSD gets things wrong, we don't call the mistakes "hallucinations", because a car that crashes or violates traffic laws is not something we want to anthropomorphise; we just call them errors.

But did you catch that? FSD is just two totally unsupported generative AI models working together. The entire system is designed around the completely false notion that generative AI can become 100% accurate and error-free. There is nothing in place to identify and mitigate errors (hallucinations).

Almost every self-driving company knows this. This is why they use multiple sensor types, run several AIs, and constrain what those AIs can do, all to mitigate these kinds of errors. Lidar, radar and ultrasonic sensors are used to verify and correct the computer vision system's understanding of the world around the car. Separate systems run radar and ultrasonic sensors to detect potential impacts and override the AI to brake and prevent an accident. GPS data and highly detailed 3D maps of the operational area are used not just to help the AI understand what it should do, but also to constrain the possible actions it can take. While these redundant systems are not enough to make a self-driving car as safe as a human driver, they do catch and mitigate almost all AI errors (hallucinations).
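That override idea can be sketched as a simple arbitration rule (a minimal illustration of the general pattern, not any specific company's code; all names and thresholds are my own assumptions): the AI proposes control inputs, and an independent, rule-based check on the radar data can veto them.

```python
# Minimal sketch of the redundancy pattern described above: an independent,
# non-AI safety layer can override the AI's proposed control inputs.
# All structures and thresholds here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Controls:
    throttle: float  # 0.0 to 1.0
    brake: float     # 0.0 to 1.0

def safety_override(ai_controls: Controls,
                    radar_distance_m: float,
                    speed_mps: float,
                    min_time_to_impact_s: float = 2.0) -> Controls:
    """If radar says an impact is imminent, brake regardless of what the
    AI proposed. Because this check runs outside the AI, an AI error
    (a 'hallucination') cannot disable it."""
    if speed_mps > 0 and radar_distance_m / speed_mps < min_time_to_impact_s:
        return Controls(throttle=0.0, brake=1.0)  # emergency brake
    return ai_controls  # no hazard detected; trust the AI's plan

# The AI hallucinates a clear road, but radar sees an object 10 m ahead
# while the car travels at 15 m/s:
result = safety_override(Controls(throttle=0.5, brake=0.0),
                         radar_distance_m=10.0, speed_mps=15.0)
print(result)  # the override brakes despite the AI's throttle request
```

A camera-only system has no equivalent independent signal to arbitrate with, which is exactly the gap the rest of the industry designs around.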

Tesla used to do something similar, given that cars before 2022 had radar and ultrasonic sensors. These were absolutely not enough to catch the majority of these errors, but at least they were something. However, Musk forced Tesla to ditch them in favour of a camera-only approach, despite his engineers warning him against the move (read more here).

This is why FSD is a dead end. Its entire concept, construction, architecture, ethos, and development have been predicated on the idea that generative AI will soon be nearly 100% reliable. Indeed, Musk has suggested numerous times that all they need is more data to make FSD unbreakably reliable and that the vast amount of data they have collected from Tesla drivers will allow them to reach this goal. This research paper blasts an exploding Starship-shaped hole through that narrative.

What does this mean for the future of Tesla? FSD was supposed to be their future. What does this mean for the credibility of Musk's leadership? The entire value of Tesla is based on the notion that he knows what he is doing with AI. I trust I do not need to fill in the blanks here.

Thanks for reading! Don't forget to follow me on YouTube, Bluesky, and Instagram, or support me over at Substack.

(Originally published on PlanetEarthAndBeyond.co)

Sources: OpenAI, METR, Will Lockett