"Mellanox's smart Switch-IB 2 switches enable the University of Tokyo to leverage new in-network computing capabilities, allowing data algorithms to be managed and executed by the network devices," said Gilad Shainer, vice president of marketing at Mellanox Technologies. "EDR 100G InfiniBand offers world-leading performance, scalability and efficiency, enabling the University of Tokyo to be at the forefront of research and scientific discovery."

"We are pleased to have installed Mellanox's InfiniBand high-performance solutions to drive our new integrated supercomputer system," said Professor Hiroshi Nakamura, Director of the Information Technology Center, The University of Tokyo. "Our new system is key to advancing ongoing research and expanding the exciting work being carried out that leverages computational science and engineering, computer science, data analysis, and machine learning."

Mellanox InfiniBand adapters provide the highest-performing interconnect solution for High-Performance Computing, Enterprise Data Centers, Web 2.0, Cloud Computing, and embedded environments.
The NEC SX-Aurora TSUBASA systems leverage general-purpose vector-based NEC coprocessors and Mellanox in-network computing interconnect accelerators to achieve the highest application performance and scalability. A typical Aurora server platform includes from one to four InfiniBand adapters; the top-of-the-line Aurora platform utilizes 32 ConnectX adapters to support 64 vector engines in a single system.
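The stated configurations imply a simple sizing rule: two vector engines per adapter, with aggregate interconnect bandwidth scaling linearly in the adapter count. A minimal sketch, assuming each ConnectX adapter contributes one EDR 100 Gb/s port (the per-port speed and the 2:1 ratio are inferred from the figures above, not stated as a formula in the announcement):

```python
# Illustrative sketch: rough interconnect sizing for NEC SX-Aurora
# TSUBASA configurations, assuming one EDR 100 Gb/s port per ConnectX
# adapter and the stated ratio of 32 adapters to 64 vector engines.

EDR_GBPS = 100  # EDR InfiniBand link speed, gigabits per second


def aggregate_bandwidth_gbps(num_adapters: int) -> int:
    """Total theoretical interconnect bandwidth across all adapters."""
    return num_adapters * EDR_GBPS


def adapters_for_vector_engines(num_vector_engines: int) -> int:
    """Adapters needed at the implied 2 vector engines per adapter,
    rounding up for odd counts."""
    return (num_vector_engines + 1) // 2


# A typical platform spans one to four adapters:
for n in (1, 4):
    print(f"{n} adapter(s): {aggregate_bandwidth_gbps(n)} Gb/s aggregate")

# Top-of-the-line platform: 64 vector engines -> 32 adapters
adapters = adapters_for_vector_engines(64)
print(f"{adapters} adapters -> {aggregate_bandwidth_gbps(adapters)} Gb/s aggregate")
```

These are theoretical link-rate totals; delivered application bandwidth depends on topology, PCIe lanes, and workload.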
InfiniBand is set to hit 200Gbps (gigabits per second) in products that were announced Thursday, potentially accelerating machine-learning platforms as well as HPC (high-performance computing) systems. The massive computing performance of new servers equipped with GPUs calls for high network speeds, and these systems are quickly being deployed to handle machine-learning tasks, Dell'Oro Group analyst Sameh Boujelbene said. So-called HDR InfiniBand, which will be generally available next year in three sets of products from Mellanox Technologies, will double the top speed of InfiniBand and offer twice the top speed of Ethernet. But the high-performance crowd that's likely to adopt this new interconnect is a small one, Boujelbene said.
ConnectX-5 introduces smart offloading engines that enable the highest application performance while maximizing data center return on investment. Furthermore, ConnectX-5 is the first adapter compatible with both PCI Express 3.0 and 4.0, enabling greater flexibility and future-proofing for the data center. With the exponential growth of data and the increase in businesses that take advantage of real-time data processing for high performance computing (HPC), data analytics, machine learning, national security, and 'Internet of Things' applications, the market needs not only the fastest interconnect available, but also interconnect intelligence that can perform data algorithms as the data moves throughout the data center. The new intelligent ConnectX-5 100G adapter enables the most advanced real-time in-network computing engines to unleash business opportunities and new technological developments. "The new ConnectX-5 100G adapter further enables high performance computing, data analytics, deep learning, storage, Web 2.0 and more applications to perform data-related algorithms on the network to achieve the highest system performance and utilization," said Gilad Shainer, vice president of marketing at Mellanox Technologies.