
June 18, 2013

Mellanox Demos ConnectX-3 FDR 56Gb/s InfiniBand Solutions Along with NVIDIA

By Jayashree Adkoli
TMCnet Contributor

Mellanox Technologies, Ltd., a provider of end-to-end interconnect solutions for data center servers and storage systems, together with NVIDIA, has exhibited InfiniBand connectivity solutions running on an NVIDIA Tegra ARM processor.

The showcased products were the ConnectX-3 FDR 56Gb/s InfiniBand solutions, which are capable of delivering very high application performance compared to competing interconnect solutions on the market. These InfiniBand solutions are switched-fabric serial communication links operating at Fourteen Data Rate (FDR).
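The "56Gb/s" figure follows directly from the FDR link parameters. As a back-of-the-envelope sketch (the 14.0625 Gb/s per-lane signalling rate and 64b/66b encoding come from the InfiniBand FDR specification, not from this article):

```python
# Back-of-the-envelope check of the FDR 56Gb/s figure.
# FDR InfiniBand parameters (from the InfiniBand spec, assumed here):
#   - 14.0625 Gb/s signalling rate per lane
#   - 64b/66b line encoding (64 payload bits per 66 line bits)
#   - a standard 4x link aggregates four lanes

LANE_RATE_GBPS = 14.0625       # FDR signalling rate per lane
ENCODING_EFFICIENCY = 64 / 66  # 64b/66b line coding
LANES = 4                      # 4x link width

raw_link_rate = LANE_RATE_GBPS * LANES                # ~56.25 Gb/s, marketed as 56 Gb/s
effective_rate = raw_link_rate * ENCODING_EFFICIENCY  # ~54.5 Gb/s of payload

print(f"raw 4x link rate: {raw_link_rate:.2f} Gb/s")
print(f"effective data rate: {effective_rate:.2f} Gb/s")
```

The efficient 64b/66b coding is part of what distinguishes FDR from the earlier QDR generation, which used 8b/10b encoding and so lost 20 percent of its raw bandwidth to line coding.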

Launched recently, these FDR 56Gb/s InfiniBand solutions run on the SECO development platform, which is powered by an NVIDIA Tegra quad-core ARM processor.

According to Mellanox, the performance capabilities demonstrated by FDR 56Gb/s InfiniBand are critical to high-performance computing (HPC), Web 2.0, cloud, big data and financial applications, which require the highest bandwidth and the lowest latency.

“Mellanox and NVIDIA are working together to bring all the benefits of a modern HPC network to ARM-based platforms,” said Ian Buck, general manager of graphics processing unit (GPU) computing software at NVIDIA, in a statement. “This technology demo, coupled with support for ARM platforms in the latest release of the CUDA parallel programming toolkit, provides the foundation for developers to build out the ARM HPC application ecosystem.”

Alessandro Santini, HPC sales at SECO, said in a press release that Mellanox's research and development work enabled SECO to integrate InfiniBand technology into its NVIDIA Tegra-based development kit, allowing the ARM-plus-GPU architecture to be deployed in real HPC clusters.

Various benchmarks were run on multiple applications using the new FDR 56Gb/s InfiniBand solutions, including applications in computational fluid dynamics, molecular dynamics, structural analysis, and weather research and forecasting (WRF).

During the benchmark demonstrations, the new FDR 56Gb/s InfiniBand solutions delivered 20 to 30 percent higher performance on sixteen compute nodes compared to QDR 40Gb/s InfiniBand, and 100 to 200 percent higher performance compared to 10 and 40 Gigabit Ethernet.
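The "percent higher" phrasing is easy to misread; as a small illustrative calculation (the conversion is plain arithmetic, and the underlying benchmark data is not published in this article), 100 to 200 percent higher performance means a 2x to 3x speedup:

```python
# Convert the article's "X percent higher performance" claims into
# multiplicative speedup factors (illustrative arithmetic only).

def speedup(percent_higher: float) -> float:
    """Convert 'X percent higher performance' into a multiplicative factor."""
    return 1 + percent_higher / 100

# vs. QDR 40Gb/s InfiniBand at sixteen compute nodes: 1.2x to 1.3x
qdr_low, qdr_high = speedup(20), speedup(30)

# vs. 10 and 40 Gigabit Ethernet: 2x to 3x
eth_low, eth_high = speedup(100), speedup(200)
```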

“This particular technology demonstration represents a significant development milestone for adoption of Mellanox’s InfiniBand solutions in new CPU platforms such as NVIDIA Tegra-based ARM platforms,” said Gilad Shainer, vice president of marketing at Mellanox Technologies, in a statement.

Together, Mellanox and NVIDIA are demonstrating the Tegra-based platforms, interconnected by the FDR 56Gb/s InfiniBand solutions, at the ongoing International Supercomputing Conference (ISC) in Leipzig, Germany.

Edited by Ashley Caputo
