NVIDIA Enhances AI Accessibility with Cloud-Based Solutions
In Brief
NVIDIA is partnering with leading cloud service providers to offer AI as a service.
Users will gain access to NVIDIA's powerful AI supercomputer, as well as acceleration libraries and pre-trained generative AI models, all available as a cloud service.
The NVIDIA DGX™ AI supercomputer can be accessed via NVIDIA DGX Cloud, which is already available on Oracle Cloud Infrastructure, with Microsoft Azure, Google Cloud Platform, and other services set to join soon.

On Wednesday, NVIDIA, a leader in AI computing technologies, introduced a new initiative aimed at providing Artificial Intelligence as a Service (AIaaS) through collaborations with significant cloud service providers.
This innovative service gives enterprise clients the opportunity to utilize NVIDIA’s top-tier AI platform, which encompasses an AI supercomputer, acceleration software and libraries, and pre-trained generative AI models.
Additionally, customers will be able to interact with various layers of NVIDIA AI directly from their browsers. The NVIDIA DGX AI supercomputer is available via the NVIDIA DGX Cloud platform, currently on Oracle Cloud, with Microsoft Azure, Google Cloud Platform, and other cloud services expected to follow suit shortly.
“We’re at a pivotal moment for AI, as it’s poised for widespread adoption across all sectors,”
noted Jensen Huang, founder and CEO of NVIDIA, in a press release.
“From small startups to large enterprises, we’re witnessing a surge in interest regarding the adaptability and capabilities of generative AI technologies. We’re here to empower customers to harness the advancements in generative AI and large language models. Our new AI supercomputer, featuring the H100 and its Transformer Engine along with Quantum-2 networking, is fully operational,” he added.
Those utilizing NVIDIA’s AI service will access two tiers of the NVIDIA AI platform. The first is the software layer, allowing them to leverage NVIDIA AI Enterprise for training and deploying large language models or other AI workloads.
The second tier is designed as an AI-model-as-a-service, where users can customize NVIDIA’s NeMo and BioNeMo models to create tailored generative AI solutions that serve their business needs (a rough sketch of that workflow follows below). In recent years, NVIDIA has sharpened its focus on specialized AI chips and services, addressing the increasing demand for AI applications, particularly as the boom of generative services like ChatGPT creates new avenues for AI chip innovation.
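To make that model-as-a-service tier concrete, here is a minimal sketch of how the open-source NeMo toolkit already exposes NVIDIA’s published checkpoints; it reflects stated assumptions rather than anything confirmed in the announcement. It assumes the toolkit is installed (pip install nemo_toolkit['all']) on a machine with a CUDA-capable NVIDIA GPU, and the machine-translation model class is only an illustrative stand-in for whichever NeMo or BioNeMo model a customer would actually customize.

```python
# A minimal sketch, not NVIDIA sample code: assumes the NeMo toolkit is installed
# and a CUDA-capable GPU is available. The model class below is an illustrative
# stand-in for the NeMo/BioNeMo models mentioned in the announcement.
import nemo.collections.nlp as nemo_nlp

# Every NeMo model class can list the pretrained checkpoints NVIDIA publishes for it.
available = nemo_nlp.models.MTEncDecModel.list_available_models()
print([entry.pretrained_model_name for entry in available])

# Download one of those checkpoints and run inference with it
# (assumes the list above is non-empty for this model class).
model = nemo_nlp.models.MTEncDecModel.from_pretrained(available[0].pretrained_model_name)
print(model.translate(["Generative AI is moving to the cloud."]))
```

Fine-tuning such a checkpoint on proprietary data, rather than simply running inference, is where the customization NVIDIA describes would come in.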
Among NVIDIA’s standout AI hardware are its Tensor Core GPUs, crafted specifically for AI tasks. Tensor Cores excel at executing vast numbers of matrix operations concurrently, a necessity for training complex deep learning models. Another significant development from NVIDIA is the Jetson range of embedded systems, tailored for edge computing scenarios. Jetson devices are compact, energy-efficient computers designed for integration into robots, drones, and similar technology, enabling intelligent features such as object detection, recognition, and autonomous navigation.
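For a rough sense of the parallel matrix math these accelerators exist to speed up, the toy PyTorch sketch below (not NVIDIA sample code; it assumes a CUDA-capable GPU is present) launches a single half-precision matrix multiply, the kind of operation Tensor Cores execute in bulk during deep learning training.

```python
# Toy illustration, assuming PyTorch and a CUDA-capable NVIDIA GPU.
# Half-precision matrix multiplies like this are the workload Tensor Cores accelerate.
import torch

assert torch.cuda.is_available(), "this sketch assumes an NVIDIA GPU"

a = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
b = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)

c = a @ b                  # one call launches a massively parallel GPU kernel
torch.cuda.synchronize()   # kernels run asynchronously; wait for completion
print(c.shape, c.dtype)    # torch.Size([4096, 4096]) torch.float16
```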
Beyond hardware, NVIDIA also offers a suite of AI services, including the NVIDIA Deep Learning Institute (DLI), which trains and certifies developers, researchers, and data scientists eager to enhance their AI expertise.
Additionally, the company runs multiple cloud-based AI services, such as NVIDIA GPU Cloud (NGC), giving developers access to pre-configured deep learning models and the software tools needed to build deep learning applications and neural networks.