WekaIO links up with Nvidia GPU Direct to uncork AI I/O bottlenecks – Blocks and Files


WekaIO has devised a "production-ready" framework to help artificial intelligence installations speed up their storage data transfers. The basic deal is that WekaIO supports Nvidia's GPUDirect Storage with its NVMe file storage. Weka says its system can deliver 73GB/sec of bandwidth to a single GPU client.

The Weka AI framework comprises customisable reference architectures and software development kits, centred on Nvidia GPUs, Mellanox networking, Supermicro servers (other server and storage hardware vendors are also supported) and Weka Matrix parallel file system software.

Paresh Kharya, director of product management for accelerated computing at Nvidia, provided a quote: "End-to-end application performance for AI requires feeding high-performance Nvidia GPUs with a high-throughput data pipeline. Weka AI leverages GPUDirect Storage to provide a direct path between storage and GPUs, eliminating I/O bottlenecks for data-intensive AI applications."
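For readers unfamiliar with GPUDirect Storage, the "direct path" Kharya describes is exposed to applications through Nvidia's cuFile API: data is DMA'd from NVMe (or NVMe-over-fabrics) storage straight into GPU memory, skipping the usual bounce buffer in host RAM. A minimal sketch of what that looks like in application code follows; the file path, buffer size and omitted error checks are illustrative assumptions, not Weka's code, and running it requires a GDS-enabled driver stack plus a supporting filesystem such as Weka's.

```cuda
// Sketch: read a file directly into GPU memory via GPUDirect Storage.
#include <fcntl.h>
#include <unistd.h>
#include <cuda_runtime.h>
#include <cufile.h>

int main(void) {
    const size_t size = 1 << 20;                 // 1MiB, placeholder size
    cuFileDriverOpen();                          // initialise the GDS driver

    int fd = open("/mnt/weka/train.dat",         // hypothetical dataset path
                  O_RDONLY | O_DIRECT);          // O_DIRECT bypasses page cache

    CUfileDescr_t descr = {0};
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t handle;
    cuFileHandleRegister(&handle, &descr);       // register the file with GDS

    void *devPtr;
    cudaMalloc(&devPtr, size);                   // destination is GPU memory
    cuFileBufRegister(devPtr, size, 0);          // pin the buffer for DMA

    // Data moves storage -> GPU without staging in host RAM.
    ssize_t n = cuFileRead(handle, devPtr, size,
                           /*file_offset=*/0, /*devPtr_offset=*/0);
    (void)n;                                     // error handling elided

    cuFileBufDeregister(devPtr);
    cudaFree(devPtr);
    cuFileHandleDeregister(handle);
    close(fd);
    cuFileDriverClose();
    return 0;
}
```

Without GPUDirect Storage, the same read would land in a host-memory buffer first and then be copied to the device with `cudaMemcpy`, costing an extra copy and CPU involvement per transfer.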
