Nvidia’s ambition is to bring AI to every industry where GPU technology can be applied.

Why it matters: During his GTC 2023 keynote, Nvidia CEO Jensen Huang described next-generation breakthroughs aimed at bringing AI to every industry. In partnership with tech giants such as Google, Microsoft, and Oracle, Nvidia is making headway in AI training and deployment, semiconductors, software libraries, systems, and cloud services. Other announced partnerships and developments involve companies such as Adobe, AT&T, and automaker BYD.
Huang cited numerous examples of the Nvidia ecosystem in action, including Microsoft 365 and Azure users accessing a platform for building virtual worlds, and Amazon using simulation to train autonomous warehouse robots. He also pointed to the rapid growth of generative AI services such as ChatGPT, calling their success “AI’s iPhone moment.”
Huang announced the H100 NVL, a new GPU based on the Nvidia Hopper architecture that runs in a dual-GPU configuration linked by NVLink, to meet the growing demand for AI and large language model (LLM) inference. The GPU is equipped with a Transformer Engine designed to process models such as GPT, which reduces LLM processing costs. The company claims that, for GPT-3 processing, a server with four pairs of H100 NVLs can run up to 10 times faster than the HGX A100.
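To give a sense of why the Transformer Engine matters for LLM inference, here is a back-of-the-envelope sketch of weight-memory footprints at FP16 versus FP8 (the lower precision the Transformer Engine supports). The parameter count is GPT-3's public 175-billion figure; everything else is illustrative arithmetic, not an Nvidia specification.

```python
# Illustrative sketch: raw weight storage for a GPT-3-scale model
# at FP16 vs FP8. Figures are rough arithmetic, not Nvidia specs.

GPT3_PARAMS = 175e9   # GPT-3 parameter count (publicly reported)
BYTES_FP16 = 2        # bytes per weight at 16-bit precision
BYTES_FP8 = 1         # bytes per weight at 8-bit precision

def weight_memory_gb(params: float, bytes_per_param: int) -> float:
    """Raw weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return params * bytes_per_param / 1e9

fp16_gb = weight_memory_gb(GPT3_PARAMS, BYTES_FP16)
fp8_gb = weight_memory_gb(GPT3_PARAMS, BYTES_FP8)

print(f"FP16 weights: {fp16_gb:.0f} GB")  # 350 GB
print(f"FP8 weights:  {fp8_gb:.0f} GB")   # 175 GB
```

Halving the bytes per weight halves the memory a model's weights occupy, which is one reason lower-precision inference cuts the number of GPUs, and hence the cost, needed to serve a large model.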
As cloud computing becomes a $1 trillion industry, Nvidia has developed the Arm-based Grace processor for AI and cloud workloads. The company claims 2x the performance of x86 processors for the same power consumption in key data center applications. The Grace Hopper superchip then combines the Grace CPU and Hopper GPU to process gigantic datasets typically found in AI databases and large language models.
In addition, the Nvidia CEO claims that its DGX H100 platform, with eight Nvidia H100 GPUs, has become a blueprint for building AI infrastructure. Several major cloud providers, including Oracle Cloud, AWS, and Microsoft Azure, have announced plans to use H100 GPUs in their offerings. Server manufacturers such as Dell, Cisco, and Lenovo are also releasing systems based on Nvidia H100 GPUs.
With generative AI models clearly in vogue, Nvidia is offering new hardware tailored to specific use cases to make inference platforms run more efficiently. The new L4 Tensor Core GPU is a versatile video-optimized accelerator that the company says delivers 120x better AI video performance and 99% better power efficiency than CPUs, while the L40 for image generation is optimized for graphics and AI-enabled 2D, video, and 3D image generation.
Nvidia’s Omniverse also figures in the modernization of the automotive industry. By 2030, the industry is expected to shift toward electric vehicles, with new factories and battery megafactories. Nvidia says Omniverse is being used by major automotive brands for a variety of purposes: Lotus uses it to assemble virtual welding stations, Mercedes-Benz to plan and optimize assembly lines, and Lucid Motors to build digital stores with accurate design data. BMW is partnering with Idealworks to train factory robots and is planning an EV factory entirely in Omniverse.
In general, there were too many announcements and partnerships to mention, but perhaps the last major milestone came from the production side. Nvidia announced a breakthrough in chip production speed and energy efficiency with the introduction of “cuLitho”, a software library designed to accelerate computational lithography by up to 40 times.
Huang explained that cuLitho can significantly reduce the computation and data processing required in chip design and manufacturing, which in turn cuts electricity and resource consumption. TSMC and semiconductor equipment supplier ASML plan to incorporate cuLitho into their manufacturing processes.
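The headline "up to 40x" figure translates directly into turnaround time for lithography computation. As a purely illustrative sketch (the two-week baseline below is a hypothetical job length, not an Nvidia figure):

```python
# Illustrative arithmetic for the claimed up-to-40x cuLitho speedup.
# The two-week baseline is a hypothetical example, not an Nvidia figure.

SPEEDUP = 40                 # claimed maximum acceleration
baseline_hours = 14 * 24     # hypothetical mask-computation job: two weeks

accelerated_hours = baseline_hours / SPEEDUP
print(f"{baseline_hours} h -> {accelerated_hours:.1f} h")  # 336 h -> 8.4 h
```

At that ratio, a computation that once occupied a compute cluster for weeks finishes in hours, which is where the claimed power and resource savings come from: the same work consumes far fewer machine-hours.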