Tesla's Dojo: a timeline

Elon Musk doesn’t want Tesla to be just an automaker. He wants Tesla to be an AI company, one that has figured out how to make cars drive themselves.

Key to that mission is Dojo, Tesla’s custom-built supercomputer designed to train its Full Self-Driving (FSD) neural networks. FSD is not actually fully autonomous; it can perform some automated driving tasks, but it still requires an attentive human behind the wheel. But Tesla believes that with more data, more compute power, and more training, it can cross the threshold from nearly self-driving to fully self-driving.

And that’s where Dojo comes in.

Musk has been teasing Dojo for some time, but the executive ramped up talk of the supercomputer throughout 2024. Now that we’re in 2025, another supercomputer called Cortex has entered the chat, but Dojo’s importance to Tesla could still be existential: with EV sales slumping, investors want assurances that Tesla can achieve autonomy. Below is a timeline of Dojo mentions and promises.

2019

First mentions of Dojo

April 22 – At Tesla’s Autonomy Day, the automaker brings its AI team onstage to talk about Autopilot and Full Self-Driving, and the AI powering them both. The company shares information about Tesla’s custom-built chips, which are designed specifically for neural networks and self-driving cars.

During the event, Musk teases Dojo, revealing that it is a supercomputer for training AI. He also notes that all Tesla cars being produced at the time have all the hardware necessary for full self-driving and only need a software update.

2020

Musk begins the Dojo roadshow

February 2 – Musk says Tesla will soon have more than a million connected vehicles worldwide with the sensors and compute needed for full self-driving, and touts Dojo’s capabilities.

“Dojo, our training supercomputer, will be able to process vast amounts of video training data and efficiently run hyperspace arrays with a vast number of parameters, plenty of memory, and ultra-high bandwidth between cores. More on this later.”

August 14 – Musk reiterates Tesla’s plan to develop a neural network training computer called Dojo “to process truly vast amounts of video data,” calling it “a beast.” He also says the first version of Dojo is “about a year away,” which would put its release date somewhere around August 2021.

December 31 – Musk says Dojo isn’t strictly needed, but it will make self-driving better. “It is not enough to be safer than human drivers, Autopilot ultimately needs to be more than 10 times safer than human drivers.”

2021

Tesla makes Dojo official

August 19 – The automaker officially announces Dojo at Tesla’s first AI Day, an event meant to attract engineers to the Tesla AI team. Tesla also introduces its D1 chip, which the automaker says it will use, alongside Nvidia GPUs, to power the Dojo supercomputer. Tesla notes that its AI cluster will house 3,000 D1 chips.

October 12 – Tesla releases a Dojo Technology whitepaper, “A Guide to Tesla’s Configurable Floating Point Formats & Arithmetic.” The whitepaper describes a technical standard for a new type of binary floating-point arithmetic used in deep-learning neural networks that can be implemented “entirely in software, entirely in hardware, or in any combination of software and hardware.”
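To make the “configurable” part concrete, here is a minimal toy sketch, not Tesla’s actual CFloat specification (whose encodings and bias rules are defined in the whitepaper itself): a quantizer in which the exponent/mantissa split and the exponent bias of a low-precision float are parameters rather than fixed by IEEE 754.

```python
# Toy illustration of a "configurable" low-precision float: the split between
# exponent and mantissa bits (and the exponent bias) is a parameter rather than
# fixed by IEEE 754. This is NOT Tesla's CFloat spec, just a sketch of the idea.

import math

def quantize(x: float, exp_bits: int = 4, man_bits: int = 3, bias: int = 7) -> float:
    """Round x to the nearest normal value representable with 1 sign bit,
    exp_bits exponent bits, and man_bits mantissa bits."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    mag = abs(x)

    # Clamp the exponent to the encodable range (all-ones exponent reserved).
    e_min, e_max = 1 - bias, (2 ** exp_bits - 2) - bias
    e = max(e_min, min(e_max, math.floor(math.log2(mag))))

    # Round the significand (in [1, 2)) to man_bits fractional bits.
    step = 2.0 ** -man_bits
    frac = round((mag / 2.0 ** e) / step) * step
    if frac >= 2.0:                     # rounding spilled into the next binade
        frac /= 2.0
        e = min(e + 1, e_max)
    return sign * frac * 2.0 ** e

# The same value lands on different grid points depending on how the 8 bits
# are split between dynamic range (exponent) and precision (mantissa).
print(quantize(0.29, exp_bits=4, man_bits=3, bias=7))    # 0.28125
print(quantize(0.29, exp_bits=5, man_bits=2, bias=15))   # 0.3125
```

Trading exponent bits for mantissa bits widens dynamic range at the cost of coarser rounding, which is the kind of knob a configurable format exposes to the training stack.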

2022

Tesla reveals Dojo progress

August 12 – Musk says that Tesla will “phase in Dojo” and won’t “need to buy as many incremental GPUs next year.”

September 30 – At Tesla’s second AI Day, the company reveals that it has installed the first Dojo cabinet, putting it through 2.2 megawatts of load testing. Tesla says it was building one tile per day (each made up of 25 D1 chips). Tesla demos Dojo on stage running a Stable Diffusion model to create an AI-generated image of a “Cybertruck on Mars.”

Importantly, the company sets a target of completing a full ExaPOD cluster by the first quarter of 2023, and says it plans to build a total of seven ExaPODs in Palo Alto.

2023

A ‘long-shot bet’

April 19 – Musk tells investors during Tesla’s first-quarter earnings call that Dojo “has the potential for an order of magnitude improvement in the cost of training,” and that it also “has the potential to become a sellable service that we would offer to other companies in the same way that Amazon Web Services offers web services.”

Musk also notes that he views Dojo as “kind of a long-shot bet,” but a “bet worth making.”

June 21 – The Tesla AI X account posts that the company’s neural networks are already in customer vehicles. The thread includes a chart with a timeline of Tesla’s current and projected compute power, which places the start of Dojo production at July 2023, although it is not clear whether this refers to the D1 chips or the supercomputer itself. Musk says that same day that Dojo is already online and running tasks at Tesla data centers.

The company also projects that Tesla’s compute will be among the top five worldwide by around February 2024 (there is no indication this was achieved) and that Tesla will reach 100 exaflops by October 2024.
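For a rough sense of scale, here is a back-of-the-envelope conversion of that 100-exaflop figure into GPU equivalents. The per-GPU throughput numbers are public dense BF16 specs, and which precision Tesla’s projection assumes is a guess on my part, so treat the output as order-of-magnitude only.

```python
# Back-of-envelope: what "100 exaflops" means in GPU equivalents.
# Per-GPU figures are public dense BF16 specs; the precision Tesla's
# projection refers to is an assumption, so this is only a rough guide.

TARGET_FLOPS = 100e18           # 100 exaflops

A100_BF16 = 312e12              # ~312 TFLOPS dense BF16 per Nvidia A100
H100_BF16 = 989e12              # ~989 TFLOPS dense BF16 per Nvidia H100

print(f"~{TARGET_FLOPS / A100_BF16:,.0f} A100 equivalents")   # ~320,513
print(f"~{TARGET_FLOPS / H100_BF16:,.0f} H100 equivalents")   # ~101,112
```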

July 19 – Tesla notes in its second-quarter earnings report that production of Dojo has begun. Musk also says that Tesla plans to spend more than $1 billion on Dojo through 2024.

September 6 – Musk posts on X that Tesla is limited by AI training compute, but that Nvidia and Dojo will fix that. He says that managing the data from the roughly 160 billion video frames Tesla gets from its cars each day is extremely difficult.
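A quick back-of-the-envelope calculation hints at why. The frame count comes from Musk’s post; the per-frame size below is an assumed number purely for illustration, so the storage figure is a guess.

```python
# Rough sense of scale for "~160 billion video frames per day". The frame
# count is from Musk's post; the per-frame size is an assumption, not a
# Tesla figure, so the byte total is order-of-magnitude only.

FRAMES_PER_DAY = 160e9
SECONDS_PER_DAY = 24 * 60 * 60

ASSUMED_BYTES_PER_FRAME = 50e3   # assume ~50 KB per compressed frame (guess)

ingest_rate = FRAMES_PER_DAY / SECONDS_PER_DAY
daily_volume_pb = FRAMES_PER_DAY * ASSUMED_BYTES_PER_FRAME / 1e15

print(f"~{ingest_rate:,.0f} frames per second, sustained")        # ~1,851,852
print(f"~{daily_volume_pb:,.0f} PB per day at 50 KB per frame")   # ~8 PB
```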

2024

Plans to scale

January 24 – During Tesla’s fourth-quarter and full-year earnings call, Musk again acknowledges that Dojo is a high-risk, high-reward project. He also says that Tesla is pursuing “the dual path of Nvidia and Dojo,” that “Dojo is working” and is “doing training jobs.” He notes that Tesla is scaling it up and has “plans for Dojo 1.5, Dojo 2, Dojo 3 and whatnot.”

January 26 – Tesla announces plans to spend $500 million to build a Dojo supercomputer in Buffalo, New York. Musk then downplays the investment a bit, posting on X that while $500 million is a large sum, it is “only equivalent to a 10k H100 system from Nvidia. Tesla will spend more than that on Nvidia hardware this year. The table stakes for being competitive in AI are at least several billion dollars per year at this point.”

April 30 – At TSMC’s North American Technology Symposium, the company says that Dojo’s next-generation training tile, the D2, which puts the entire Dojo tile onto a single silicon wafer, is already in production, according to IEEE Spectrum.

May 20 – Musk notes that the rear portion of the Giga Texas factory will include the construction of “a super dense, water-cooled supercomputer cluster.”

June 4 – A CNBC report reveals that Musk diverted thousands of Nvidia chips reserved for Tesla to X and xAI. After initially saying the report was false, Musk posts on X that Tesla had nowhere to send the Nvidia chips to power them up, due to the continued construction of the south extension of Giga Texas, “so they would have just sat in a warehouse.” He notes that the extension “will house 50k H100s for FSD training.”

He also posts:

“Of the roughly $10B in AI expenditures I said Tesla would make this year, about half is internal, primarily the Tesla-designed AI inference computer and sensors present in all of our cars, plus Dojo. For building the AI training superclusters, Nvidia hardware is about 2/3 of the cost. My current best guess for Nvidia purchases by Tesla are $3 billion to $4 billion this year.”

July 1 – Musk reveals on X that current Tesla vehicles may not have the right hardware for the company’s next-generation AI model. He says that the roughly 5x increase in parameter count with the next-generation AI “is very difficult to achieve without upgrading the vehicle inference computer.”

NVIDIA supply challenges

July 23 – During Tesla’s second-quarter earnings call, Musk says that demand for Nvidia hardware is “so high that it’s often difficult to get the GPUs.”

“I think this requires that we put a lot more effort on Dojo in order to ensure that we have the training capability we need,” Musk says. “And we do see a path to being competitive with Nvidia with Dojo.”

A chart in Tesla’s investor deck predicts that Tesla’s AI training capacity will ramp to roughly 90,000 H100-equivalent GPUs by the end of 2024, up from around 40,000 in June. Later that day on X, Musk posts that Dojo 1 will have “approximately 8k H100-equivalent of training online.” He also posts photos of the supercomputer, which appears to use the same refrigerator-like stainless steel exterior as Tesla’s Cybertrucks.

From Dojo to Cortex

July 30 – AI5 is roughly 18 months away from high-volume production, Musk says in a reply to a post from someone claiming to start a group of “Tesla HW4/AI4 owners angry about being left behind when AI5 comes out.”

August 3 – Musk posts on X that he took a tour of “the Tesla supercompute cluster at Giga Texas (aka Cortex).” He notes that it will be made up of around 100,000 Nvidia H100/H200 GPUs with “massive storage for video training of FSD & Optimus.”

August 26 – Musk posts on X a video of Cortex, which he refers to as “the giant new AI training supercluster being built at Tesla HQ in Austin to solve real-world AI.”

2025

No Dojo updates in 2025

January 29 – Tesla’s fourth-quarter and full-year 2024 earnings call included no mention of Dojo. However, Cortex, Tesla’s new AI training supercluster at the Austin gigafactory, did make an appearance. Tesla noted in its shareholder letter that it completed the deployment of Cortex, which consists of roughly 50,000 Nvidia H100 GPUs.

“Cortex helped enable V13 of FSD (Supervised), which boasts major improvements in safety and comfort thanks to a 4.2x increase in data, higher-resolution video inputs … among other enhancements,” according to the letter.

During the call, CFO Vaibhav Taneja said that Tesla accelerated the buildout of Cortex to speed up the rollout of FSD V13. He said that cumulative AI-related capital expenditures, including infrastructure, have “so far been approximately $5 billion.” For 2025, Taneja said he expects AI-related capex to be flat.

This story was originally published on August 10, 2024, and we will update it as new information develops.
