The projected expenditure for training GPT-5 is around $2.0 to $2.5 billion, with training activities set to begin next year.

Today, Twitter user Martin Shkreli from New York tweeted that training GPT-5 is anticipated to demand $2.0 to $2.5 billion, involving the deployment of 500,000 H100 Tensor Core GPUs over a 90-day period or through alternative configurations. The training is slated to commence in the upcoming year.
OpenAI is actively working on enhancements to GPT-4, introducing a variety of features. These include embodiment, agency, Socratic reasoning, knowledge-graph construction, world models, multimodality, strategic planning, improved semantic interpretability, hive minds, and control and limitation mechanisms, alongside smaller, high-priority tasks.
The production scale of H100 and A100 GPUs raises some concerns. Will the necessary supply of these GPUs be sufficient for this significant venture? It’s estimated that around one million H100s will be produced by year-end, with an anticipated output of five million units for the following year.
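Taking the article's production estimates at face value, a quick sketch shows what share of total H100 output the rumored run would consume (all figures are the article's estimates, not confirmed numbers):

```python
# Supply check: rumored 500,000-GPU training run vs. estimated
# H100 production (figures taken from the article's estimates).
needed = 500_000
produced_this_year = 1_000_000   # est. H100s produced by year-end
produced_next_year = 5_000_000   # est. output for the following year

share_now = needed / produced_this_year
share_next = needed / produced_next_year
print(f"{share_now:.0%} of this year's output, "
      f"{share_next:.0%} of next year's")
```

Under these assumptions, the run would need half of this year's entire production but only a tenth of next year's, which is why the timing of the order matters so much.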
When discussing costs, it's crucial to consider the GPU prices. Counting these GPUs as part of training expenses could be misleading, since they retain their usability beyond the training phase. The total value of these GPUs could reach up to $20 billion.
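As a sanity check on that $20 billion figure, the fleet value scales linearly with the assumed unit price; the price points below are illustrative assumptions, with the article's upper bound corresponding to roughly $40,000 per unit:

```python
# Fleet-value arithmetic; unit prices are illustrative assumptions,
# not confirmed figures from the article.
NUM_GPUS = 500_000
for unit_price in (25_000, 30_000, 40_000):
    fleet_value = NUM_GPUS * unit_price
    print(f"${unit_price:,}/GPU -> ${fleet_value / 1e9:.1f}B total")
```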
It's noteworthy that TSMC, which fabricates the H100, had initially capped production at around 15,000 units per month, but has since increased output to nearly 50,000 units monthly.
Looking at electricity costs, these represent a relatively minor fraction of the total computational expenses. To illustrate, using 6 million kWh would equate to about $1 million.
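Extending that illustration to the full rumored run is straightforward. The sketch below assumes 700 W per GPU (the H100 SXM's rated TDP) and uses the rate implied by the article's example ($1 million for 6 million kWh, about $0.17/kWh); both the power draw and rate are assumptions, not figures from the article:

```python
# Hedged back-of-envelope: electricity cost for the rumored run.
# Assumptions: 700 W per H100 (SXM TDP), 500,000 GPUs, 90 days,
# and the article's implied rate of $1M per 6M kWh.
gpus = 500_000
watts_per_gpu = 700
hours = 90 * 24
rate_usd_per_kwh = 1_000_000 / 6_000_000   # ~$0.167/kWh

energy_kwh = gpus * watts_per_gpu * hours / 1000
cost_usd = energy_kwh * rate_usd_per_kwh
print(f"energy ~{energy_kwh / 1e6:.0f}M kWh, cost ~${cost_usd / 1e6:.0f}M")
```

Even at this scale, the result comes to roughly $126 million, about 5% of a $2.5 billion budget, which supports the article's point that electricity is a minor fraction of the total.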
Securing 500,000 H100 GPUs before next year looks to be an ambitious target, even with Microsoft's backing, and it remains unclear whether the training will actually demand as much computational power as suggested. As for Nvidia itself, its market capitalization has reportedly tripled over 2023, exceeding $1 trillion, driven primarily by skyrocketing demand for its chips in AI. However, current restrictions on exporting high-end AI chips to China could influence both production and training expenses.
Nvidia is reaping nearly 1,000% profit on every H100 accelerator it sells, according to Barron's contributor Tae Kim. The market price for each HPC accelerator ranges from $25,000 to $30,000, against an estimated cost of approximately $3,320 for the chip and its related components. While the exact cost structure remains ambiguous, it's believed to center on manufacturing expenses; Nvidia's research and development costs are also significant, as the creation of chips like the H100 involves extensive labor from skilled professionals. Nvidia's AI-optimized products are reportedly sold out through 2024, with projections suggesting that the AI accelerator industry could reach a valuation of about $150 billion by 2027. At the same time, U.S. export constraints, financial pressures, and opportunity costs might restrict Nvidia's investments in other ventures or limit risk-taking in research and development, even with its well-established infrastructure and product range.
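The margin implied by the quoted figures can be checked directly; against an estimated $3,320 cost, the $25,000 to $30,000 price range works out to roughly 650–800%, in the neighborhood of the "nearly 1,000%" headline figure:

```python
# Margin arithmetic implied by the figures attributed to Tae Kim.
est_cost = 3_320                 # estimated chip + component cost, USD
for price in (25_000, 30_000):   # quoted H100 price range, USD
    margin_pct = (price - est_cost) / est_cost * 100
    print(f"${price:,} sale price -> ~{margin_pct:.0f}% margin")
```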
The onset of the AI revolution is merely beginning: could GPT-5 pave the way toward achieving true Artificial General Intelligence?

Please be advised that the information presented on this page should not be construed as legal, tax, investment, financial, or any other form of advice. It's crucial to invest only what you can afford to lose and to seek independent financial counsel if needed. For more details, we recommend reviewing the terms and conditions along with the help and support sections offered by the issuer or advertiser. MetaversePost is dedicated to delivering accurate and impartial news coverage, but be aware that market conditions can fluctuate without prior notice.
Damir leads the team as the product manager and editor at Metaverse Post, focusing on domains such as AI/ML, AGI, LLMs, the Metaverse, and Web3 topics. His articles engage over a million readers each month, highlighting his expertise developed over a decade in SEO and digital marketing. Damir has been featured in notable publications like Mashable, Wired, Cointelegraph, The New Yorker, Inside.com, Entrepreneur, and BeInCrypto. As a digital nomad, he moves between the UAE, Turkey, Russia, and the CIS. With a bachelor's degree in physics, Damir attributes his critical thinking skills to his educational background, aiding him in navigating the rapidly evolving digital landscape.