Elon Musk doesn’t want Tesla to be just an automaker. He wants Tesla to be an AI company, one that has figured out how to make cars drive themselves.
Central to that mission is Dojo, Tesla’s custom-built supercomputer designed to train its Full Self-Driving (FSD) neural networks. FSD isn’t actually fully self-driving; it can perform some automated driving tasks but still requires an attentive human behind the wheel. But Tesla believes that with more data, more compute power, and more training, it can cross the threshold from almost self-driving to full self-driving.
And that’s where Dojo comes in.
Musk had been teasing Dojo for some time, but the executive ramped up discussions of the supercomputer throughout 2024. Now that we’re in 2025, another supercomputer called Cortex has entered the chat, but Dojo’s importance to Tesla could still be existential: with EV sales slumping, investors want assurances that Tesla can achieve autonomy. Below is a timeline of Dojo mentions and promises.
2019
First mentions of Dojo
April 22 – At Tesla’s Autonomy Day, the automaker brings its AI team onstage to talk about Autopilot and Full Self-Driving, and the AI powering them both. The company shares information about Tesla’s custom-built chips, which are designed specifically for neural networks and self-driving cars.
During the event, Musk teases Dojo, revealing that it’s a supercomputer for training AI. He also notes that all Tesla cars being produced at the time would have all the hardware necessary for full self-driving and would only need a software update.
2020
Musk begins the Dojo roadshow
Feb 2 – Musk says Tesla will soon have more than a million connected vehicles worldwide with the sensors and compute needed for full self-driving, and touts Dojo’s capabilities.
“Dojo, our training supercomputer, will be able to process vast amounts of video training data & efficiently run hyperspace arrays with a vast number of parameters, plenty of memory & ultra-high bandwidth between cores. More on this later.”
August 14 – Musk reiterates Tesla’s plan to develop a neural network training computer called Dojo “to process truly vast amounts of video data,” calling it “a beast.” He also says the first version of Dojo is “about a year away,” which would put its launch date somewhere around August 2021.
December 31 – Musk says Dojo isn’t needed, but it will make self-driving better. “It’s not enough to be safer than human drivers, Autopilot ultimately needs to be more than 10 times safer than human drivers.”
2021
Tesla makes Dojo official
August 19 – The automaker officially announces Dojo at Tesla’s first AI Day, an event meant to attract engineers to Tesla’s AI team. Tesla also introduces its D1 chip, which the automaker says it will use, alongside Nvidia’s GPUs, to power the Dojo supercomputer. Tesla notes its AI cluster will house 3,000 D1 chips.
October 12 – Tesla releases a Dojo Technology whitepaper, “a guide to Tesla’s configurable floating point formats & arithmetic.” The whitepaper outlines a technical standard for a new type of binary floating-point arithmetic that is used in deep learning neural networks and can be implemented “entirely in software, entirely in hardware, or in any combination of software and hardware.”
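Since the whitepaper is about number formats rather than hardware, a toy example may help make “configurable” concrete. Below is a minimal Python sketch of rounding a value into a low-precision float whose exponent width, mantissa width, and bias are parameters that can be traded off; the specific widths and bias shown are hypothetical and are not taken from Tesla’s formats.

```python
# Illustrative only: a toy "configurable" low-precision float, where the
# exponent width, mantissa width, and bias are knobs you can trade off.
# The defaults below are hypothetical, not Tesla's actual formats.
import math

def round_to_toy_float(x: float, exp_bits: int = 4, man_bits: int = 3, bias: int = 7) -> float:
    """Return the nearest value representable in the toy format,
    simulating the precision loss a training kernel would see."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    # Decompose |x| as m * 2**e with 1 <= m < 2 (normalized form).
    m, e = math.frexp(abs(x))      # frexp gives 0.5 <= m < 1
    m, e = m * 2.0, e - 1
    # Clamp the exponent to the range a biased exponent field could encode.
    e_min, e_max = 1 - bias, (2**exp_bits - 2) - bias
    e = max(e_min, min(e_max, e))
    # Keep only man_bits fractional bits of the mantissa.
    scale = 2.0**man_bits
    m = round(m * scale) / scale
    return sign * m * 2.0**e

if __name__ == "__main__":
    # Out-of-range values would need proper saturation/inf handling, omitted here.
    for v in (0.1, 1.0, 3.14159):
        print(f"{v} -> {round_to_toy_float(v)}")
```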
2022
Tesla reveals Dojo progress
August 12 – Musk says Tesla will “phase in Dojo. Won’t need to buy as many incremental GPUs next year.”
September 30 – At Tesla’s second AI Day, the company reveals that it has installed the first Dojo cabinet and load-tested it at 2.2 megawatts. Tesla says it is building one tile per day, each made up of 25 D1 chips. Tesla demos Dojo onstage running a Stable Diffusion model to create an AI-generated image of a “Cybertruck on Mars.”
Importantly, the company sets a target date of Q1 2023 for completing a full Exapod cluster, and says it plans to build a total of seven Exapods in Palo Alto.
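Putting this article’s own numbers together (25 D1 chips per tile at one tile per day, and the 3,000-chip cluster Tesla described at AI Day 2021), a rough back-of-envelope, not a Tesla figure, shows what that production pace implies:

```python
# Back-of-envelope using only figures quoted in this article; not Tesla data.
chips_per_tile = 25        # D1 chips per Dojo training tile (AI Day 2022)
chips_per_cluster = 3_000  # D1 chips in the cluster described at AI Day 2021
tiles_per_day = 1          # stated production rate at AI Day 2022

tiles_per_cluster = chips_per_cluster // chips_per_tile   # 120 tiles
days_to_build = tiles_per_cluster / tiles_per_day         # about 120 days

print(f"{tiles_per_cluster} tiles per cluster, roughly {days_to_build:.0f} days of tile production")
```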
2023
A ‘long-shot bet’
April 19 – Musk tells investors during Tesla’s first-quarter earnings call that Dojo “has the potential for an order of magnitude improvement in the cost of training,” and also “has the potential to become a sellable service that we would offer to other companies in the same way that Amazon Web Services offers web services.”
Musk also notes that he’d “look at Dojo as kind of a long-shot bet,” but a “bet worth making.”
June 21 – The Tesla AI X account posts that the company’s neural networks are already in customer vehicles. The thread includes a graph with a timeline of Tesla’s current and projected compute power, which places the start of Dojo production at July 2023, though it’s not clear whether this refers to the D1 chips or the supercomputer itself. Musk says that same day that Dojo is already online and running tasks at Tesla data centers.
The company also projects that Tesla’s compute will be among the top five in the world by around February 2024 (there are no indications this was successful) and that Tesla will reach 100 exaflops by October 2024.
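For a sense of scale, and assuming roughly 1 petaflop of dense BF16 throughput per H100-class GPU (a common ballpark, not a figure from Tesla or this article), 100 exaflops works out to on the order of 100,000 H100-equivalents, which is in the same neighborhood as the H100-equivalent targets Tesla published in 2024 (see below).

```python
# Rough conversion only; assumes ~1 PFLOPS dense BF16 per H100-class GPU,
# a ballpark assumption rather than a number quoted in this article.
target_exaflops = 100
pflops_per_gpu = 1.0          # assumed throughput per H100-equivalent
pflops_per_exaflop = 1_000    # 1 exaflop = 1,000 petaflops

gpu_equivalents = target_exaflops * pflops_per_exaflop / pflops_per_gpu
print(f"~{gpu_equivalents:,.0f} H100-equivalents to reach {target_exaflops} exaflops")
```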
July 19 – Tesla notes in its second-quarter earnings report that it has started production of Dojo. Musk also says Tesla plans to spend more than $1 billion on Dojo by 2024.
September 6 – Musk posts on X that Tesla is limited by AI training compute, but that Nvidia and Dojo will fix that. He says managing the data from the roughly 160 billion frames of video Tesla gets from its cars per day is extremely difficult.
2024
Plans to scale
January 24 – During Tesla’s fourth-quarter and full-year earnings call, Musk acknowledges again that Dojo is a high-risk, high-reward project. He also says that Tesla is pursuing “the dual path of Nvidia and Dojo,” that “Dojo is working” and is “doing training jobs.” He notes Tesla is scaling it up and has “plans for Dojo 1.5, Dojo 2, Dojo 3 and whatnot.”
January 26 – Tesla announces plans to spend $500 million to build a Dojo supercomputer in Buffalo. Musk then downplays the investment somewhat, posting on X that while $500 million is a large sum, it’s “only equivalent to a 10k H100 system from Nvidia. Tesla will spend more than that on Nvidia hardware this year. The table stakes for being competitive in AI are at least several billion dollars per year at this point.”
April 30 – At TSMC’s North American Technology Symposium, the company says Dojo’s next-generation training tile (the D2, which puts an entire Dojo tile onto a single silicon wafer rather than connecting 25 chips to make one tile) is already in production, according to IEEE Spectrum.
May 20 – Musk notes that the rear portion of the Giga Texas factory extension will include the construction of “a super dense, water-cooled supercomputer cluster.”
June 4 – A CNBC report reveals that Musk diverted thousands of Nvidia chips reserved for Tesla to X and xAI. After initially saying the report was false, Musk posts on X that Tesla had no location to send the Nvidia chips to turn them on, due to the ongoing construction of the south extension of Giga Texas, “so they would have just sat in a warehouse.” He notes the extension will “house 50k H100s for FSD training.”
He also posts:
“Of the roughly $10B in AI-related expenditures I said Tesla would make this year, about half is internal, primarily the Tesla-designed AI inference computer and sensors present in all of our cars, plus Dojo. For building the AI training superclusters, NVidia hardware is about 2/3 of the cost. My current best guess for Nvidia purchases by Tesla are $3B to $4B this year.”
July 1 – Musk reveals on X that current Tesla vehicles may not have the right hardware for the company’s next-gen AI model. He says the roughly 5x increase in parameter count with the next-gen AI “is very difficult to achieve without upgrading the vehicle inference computer.”
Nvidia supply challenges
July 23 – During Tesla’s second-quarter earnings call, Musk says demand for Nvidia hardware is “so high that it’s often difficult to get the GPUs.”
“I think this therefore requires that we put a lot more effort on Dojo in order to ensure that we’ve got the training capability that we need,” Musk says. “And we do see a path to being competitive with Nvidia with Dojo.”
A graph in Tesla’s investor deck predicts that Tesla’s AI training capacity will ramp to roughly 90,000 H100-equivalent GPUs by the end of 2024, up from around 40,000 in June. Later that day on X, Musk posts that Dojo 1 will have “roughly 8k H100-equivalent of training online by end of year.” He also posts photos of the supercomputer, which appears to use the same fridge-like stainless steel exterior as Tesla’s Cybertrucks.
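Taken together, those two figures suggest Dojo would account for only a modest slice of Tesla’s projected training capacity by year-end; a quick calculation using only the numbers above:

```python
# Uses only the end-of-2024 figures reported above (in H100-equivalents).
total_h100_equiv = 90_000   # projected total Tesla AI training capacity
dojo1_h100_equiv = 8_000    # Musk's estimate for Dojo 1 online by year-end

share = dojo1_h100_equiv / total_h100_equiv
print(f"Dojo 1 is about {share:.0%} of projected end-of-2024 training capacity")
```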
From Dojo to Cortex
July 30 – AI5 is roughly 18 months away from high-volume production, Musk says in a reply to a post from someone claiming to start a club of “Tesla HW4/AI4 owners angry about getting left behind when AI5 comes out.”
August 3 – Musk posts on X that he did a walkthrough of “the Tesla supercompute cluster at Giga Texas (aka Cortex).” He notes that it will be made up of roughly 100,000 H100/H200 Nvidia GPUs with “massive storage for video training of FSD & Optimus.”
August 26 – Musk posts a video of Cortex on X, calling it “the giant new AI training supercluster being built at Tesla HQ in Austin to solve real-world AI.”
2025
No updates on Dojo in 2025
January 29 – Tesla’s Q4 and full-year 2024 earnings call included no mention of Dojo. Cortex, Tesla’s new AI training supercluster at the Austin gigafactory, did make an appearance, however. Tesla noted in its shareholder deck that it completed the deployment of Cortex, which is made up of roughly 50,000 H100 Nvidia GPUs.
“Cortex helped enable V13 of FSD (Supervised), which boasts major improvements in safety and comfort thanks to 4.2x increase in data, higher resolution video inputs … among other enhancements,” according to the letter.
During the call, CFO Vaibhav Taneja noted that Tesla accelerated the buildout of Cortex to speed up the rollout of FSD V13. He said that cumulative AI-related capital expenditures, including infrastructure, “so far has been approximately $5 billion.” For 2025, Taneja said he expects capex to be flat as it relates to AI.
This story originally published on August 10, 2024, and we’ll update it as new information develops.