Arm-based superchip and BlueField-3 DPU power innovative architecture to enable generative AI-driven wireless communications
COMPUTEX—NVIDIA and SoftBank Corp. today announced a collaboration built on the NVIDIA GH200 Grace Hopper™ Superchip, which SoftBank plans to roll out at new distributed AI data centers across Japan.
To pave the way for the rapid global deployment of generative AI applications and services, SoftBank is working with NVIDIA to build data centers that can host generative AI and wireless applications on a shared multi-tenant server platform, reducing costs and improving energy efficiency.
The platform uses the new NVIDIA MGX™ reference architecture with Arm Neoverse-based GH200 Superchips and is expected to improve the performance, scalability, and resource utilization of application workloads.
“As we enter an era in which AI and society coexist, demand for data processing and electricity will rapidly increase,” said Junichi Miyagawa, President and CEO of SoftBank Corp. “SoftBank will provide next-generation social infrastructure that supports a hyper-digital society. By collaborating with NVIDIA, our infrastructure will be able to achieve significantly higher performance through the use of AI, including RAN optimization. We also expect this to help us create a network of interconnected data centers that can be used to reduce energy consumption, share resources, and host a variety of generative AI applications.”
“The demand for accelerated computing and generative AI is fundamentally changing data center architecture,” said Jensen Huang, founder and CEO of NVIDIA. “NVIDIA Grace Hopper is an innovation designed to process and scale out generative AI services. As with other forward-thinking efforts, SoftBank is leading the world in building communications networks designed to host generative AI services.”
The new data centers will be more evenly distributed across a wider footprint than their predecessors and will handle both AI and 5G workloads. This will allow them to operate closer to peak capacity, with lower latency and significantly lower overall energy costs.
SoftBank is looking to create 5G applications for autonomous driving, AI factories, augmented and virtual reality, computer vision, and digital twins.
Virtual RAN for record-breaking throughput
NVIDIA Grace Hopper and NVIDIA BlueField®-3 data processing units will accelerate software-defined 5G vRAN, as well as generative AI applications, without custom hardware accelerators or dedicated 5G CPUs. In addition, the NVIDIA Spectrum Ethernet switch with BlueField-3 provides highly precise timing protocol support for 5G.
Based on publicly available data on 5G accelerators, the solution achieves breakthrough 5G speeds on an NVIDIA-accelerated 1U MGX-based server design, delivering industry-leading downlink capacity of 36 Gbps. Carriers have struggled to achieve such high downlink capacity using industry-standard servers.
New reference architecture
NVIDIA MGX helps system manufacturers and hyperscale customers address a wide range of AI, HPC, and NVIDIA Omniverse™ applications.
By incorporating NVIDIA Aerial™ software for high-performance, software-defined, cloud-native 5G networks, these 5G base stations will allow carriers to dynamically allocate computing resources and achieve 2.5x the power efficiency of competing products.
“The future of generative AI requires high-performance, energy-efficient computing like NVIDIA’s Arm Neoverse-based Grace Hopper Superchip,” said Rene Haas, CEO of Arm. “By combining Grace Hopper with the NVIDIA BlueField DPU, SoftBank’s new 5G data centers will be able to run the most demanding compute- and memory-intensive applications, bringing dramatic efficiency improvements to software-defined 5G and AI on Arm.”