The University of Cambridge Chooses Mellanox FDR InfiniBand

System utilizes Mellanox's performance-leading Connect-IB adapters to deliver over 100Gb/s of node-to-node bandwidth and a message rate of 137 million messages per second
Tuesday, 10 December 2013    Source: http://www.mellanox.com

SUNNYVALE, CA and YOKNEAM, ISRAEL - December 10, 2013 - Mellanox® Technologies, Ltd. (NASDAQ: MLNX), a leading supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, today announced that its FDR InfiniBand solution provides the University of Cambridge GPU-based supercomputer with leading, scalable performance. The system has a sustained performance of over 250 TF and ranked 166th on the November 2013 TOP500 list of supercomputers. The system was designed to advance energy-efficient high-performance computing and ranked second on the November 2013 Green500 list, which ranks supercomputers by energy efficiency.

The newly deployed supercomputer is partly funded by STFC to drive SKA computing-system development within the newly formed "SKA Open Architecture LAB." The SKA is a multinational collaboration to build the world's largest radio telescope, which will require at its core the world's largest streaming data processor, many times larger than the most powerful HPC system in operation today. The new system will take a central role in driving system development for the SKA, placing STFC and the University of Cambridge at the forefront of large-scale, big-data science.

"The network design was specifically architected to provide the highest I/O bandwidth for large scale big-data challenges and to have the highest message possible for large parallel application scaling," said Paul J. Calleja, Director of High Performance Computing at the University of Cambridge. "We chose Mellanox's end-to-end FDR InfiniBand interconnects to connect the system, specifically by using their Connect-IB adapters in a dual rail network, as well as utilize the NVIDA GPUDirect RDMA communication acceleration to significantly increase the systems parallel efficiency."

"We are pleased to have Mellanox's FDR InfiniBand solution as the interconnect of choice for the University of Cambridge's supercomputer, the UK's fastest academic-based supercomputer," said Gilad Shainer, vice president of marketing at Mellanox Technologies. "Utilizing Mellanox's Connect-IB adapters, the University of Cambridge is able to take advantage of the adapter's leading message rate and bandwidth performance to enable fundamental advances in many areas of astrophysics and cosmology."

Connect-IB is the world's most scalable server and storage adapter solution for High-Performance Computing (HPC), Web 2.0, cloud, Big Data, financial services, virtualized data centers and storage environments. Connect-IB adapters deliver the highest throughput available, 100Gb/s utilizing PCI Express 3.0 x16, unmatched scaling with innovative transport services, sub-microsecond latency, and a message rate of 137 million messages per second - 4X higher than competing solutions.
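
Message-rate figures like the one above are typically measured with small-message windowed benchmarks. The following is a minimal sketch in that spirit, assuming two MPI ranks on separate nodes; the window size, iteration count, and 8-byte payload are illustrative assumptions, and measured rates depend on the MPI library and fabric.

```c
/*
 * Sketch: small-message rate test between two MPI ranks, in the spirit
 * of windowed message-rate benchmarks. Run with exactly two ranks;
 * WINDOW, ITERS, and MSG are illustrative parameters.
 */
#include <mpi.h>
#include <stdio.h>

#define WINDOW 64                     /* messages in flight per window */
#define ITERS  10000
#define MSG    8                      /* 8-byte payload */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    static char bufs[WINDOW][MSG];
    MPI_Request reqs[WINDOW];

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    if (rank < 2) {
        for (int i = 0; i < ITERS; i++) {
            /* Rank 0 posts a window of sends, rank 1 a window of
             * receives; both wait for the window to drain. */
            for (int w = 0; w < WINDOW; w++) {
                if (rank == 0)
                    MPI_Isend(bufs[w], MSG, MPI_CHAR, 1, 0,
                              MPI_COMM_WORLD, &reqs[w]);
                else
                    MPI_Irecv(bufs[w], MSG, MPI_CHAR, 0, 0,
                              MPI_COMM_WORLD, &reqs[w]);
            }
            MPI_Waitall(WINDOW, reqs, MPI_STATUSES_IGNORE);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("message rate: %.1f million msgs/s\n",
               (double)ITERS * WINDOW / (t1 - t0) / 1e6);

    MPI_Finalize();
    return 0;
}
```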

Available today, Mellanox's FDR 56Gb/s InfiniBand solution includes Connect-IB adapter cards, SwitchX®-2 based switches (from 12-port to 648-port), fiber and copper cables, and ScalableHPC accelerator and management software.
