News

Deploying artificial intelligence (AI) workloads that use graphics processing units (GPUs) in datacentres requires specific network hardware and optimisations.
At the GTC event in March 2025, we heard all about NVIDIA's planned GPU road map every 12 to ... that try to discover the underlying network topology using localization techniques.
And GPU utilization is, of course, a prime factor in the overall performance of a training cluster or an inference cluster. From that standpoint, the X-Series does bring in a lot ...
September 29, 2016: NetSpeed Systems Inc. announced today the release of its Gemini 3.0 cache-coherent network-on-chip IP, which maximizes ... It supports up to 64 fully cache-coherent CPU clusters, GPU ...
DriveNets enhances its Network Cloud-AI platform with multi-tenancy and multi-site features for GPU clusters spanning up to ...
Reconfigurable datacenter networks (RDCNs), which can adapt their topology dynamically, based on innovative optical switching ...
When building large-scale AI GPU clusters for training or inference, the backend network should be high-performance, lossless, and predictable to ensure maximum GPU utilization. This is hard to ...
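To make the utilization point concrete, here is a minimal back-of-the-envelope sketch, not drawn from any of the items above: the function name and all step times and stall durations are assumed, purely illustrative numbers. It shows how communication stalls on the backend network (for example, from packet loss and retransmissions stretching a gradient all-reduce) eat directly into effective GPU utilization.

```python
# Illustrative model only: each synchronous training step has a GPU compute
# phase plus a communication phase (e.g. a gradient all-reduce) that can be
# stretched by congestion or packet loss. All numbers are assumptions.

def effective_utilization(compute_ms: float, comm_ms: float, stall_ms: float) -> float:
    """Fraction of wall-clock time the GPUs spend computing."""
    return compute_ms / (compute_ms + comm_ms + stall_ms)

compute_ms = 80.0   # assumed per-step GPU compute time
comm_ms = 20.0      # assumed collective-communication time on an ideal fabric

# On a lossless, predictable fabric the extra stall is ~0; with drops and
# retransmissions the tail of the collective grows and every GPU waits.
for stall_ms in (0.0, 10.0, 40.0):
    util = effective_utilization(compute_ms, comm_ms, stall_ms)
    print(f"stall {stall_ms:5.1f} ms -> GPU utilization {util:.0%}")
```

With these assumed numbers, utilization falls from roughly 80% with no stalls to under 60% with a 40 ms stall per step. Because every GPU in a synchronous step waits for the slowest participant in the collective, a single congested or lossy link degrades the whole cluster, which is why the backend fabric is engineered to be lossless and predictable.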