Nvidia Unveils Next-Generation AI Chips and Software
With Nvidia already holding over 80% of the AI chip market and having seen its valuation surge since the onset of the AI boom, the chipmaker's latest release is expected to further cement its dominance in the industry.
At its annual GPU Technology Conference (GTC) 2024, the global semiconductor leader unveiled a new generation of artificial intelligence (AI) chips and software for enterprises and cloud providers looking to scale computing power for large language models and other AI applications.
Dubbed Blackwell, the new family of AI chips promises significantly better computing performance and energy efficiency, and is expected to deliver more than double the AI performance of previous-generation models.
Boasting up to 30 times the inference speed of previous models, Blackwell-architecture GPUs pack 208 billion transistors and are manufactured on a custom-built TSMC 4NP process, with two reticle-limited GPU dies connected by a 10 TB/second chip-to-chip link into a single, unified GPU.
The Blackwell GPUs offer substantial performance boosts, enabling AI companies to train larger and more complex models, with features designed to support generative AI services such as ChatGPT.
This capability has attracted considerable interest from cloud service providers such as Amazon, Google, Microsoft, and Oracle, which have a high demand for chips capable of training and deploying their large AI models. The first Blackwell chip, the GB200, is set to be released later this year.
In addition to the Blackwell GPUs, Nvidia introduced generative AI microservices during the conference. Built on the Nvidia CUDA platform, these microservices enable enterprises to develop and deploy custom applications seamlessly across Nvidia's GPUs.
Also included in these microservices is Nvidia NIM (Nvidia Inference Microservices), which simplifies the deployment of AI models for inference tasks, including on older Nvidia GPUs.
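NIM services are presented as exposing industry-standard HTTP APIs, so a client would typically talk to one much like any OpenAI-style endpoint. As a minimal sketch, the snippet below builds such a chat-completion request body; the endpoint URL and model name are illustrative assumptions, not details confirmed in the announcement.

```python
import json

# Assumed address of a locally deployed NIM service (hypothetical).
NIM_ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> str:
    """Serialize an OpenAI-style chat-completion request body as JSON."""
    payload = {
        "model": model,  # illustrative model name, supplied by the caller
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return json.dumps(payload)

# Example request body for a hypothetical model served by NIM.
body = build_chat_request("example-llm", "Summarize the Blackwell announcement.")
print(json.loads(body)["model"])  # → example-llm
```

An application would POST this body to the service endpoint; because the request shape is the familiar chat-completion format, existing client code can often be pointed at a NIM deployment with little change.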
Overall, Nvidia's latest innovations aim to meet the increasing demand for powerful AI hardware and software solutions, solidifying its position as a leader in the AI industry.