HPE Chases Deep Learning With GPU Laden Apollo Systems

With machine learning taking off among hyperscalers and others with massive amounts of data to chew on, and with traditional simulation and modeling applications scaling better across multiple GPUs, every server maker is in an arms race to see how many GPUs it can cram into its machines to present bigger chunks of compute to applications.

As Nvidia's GPU Technology Conference kicks off in San Jose, Hewlett Packard Enterprise, the dominant seller of servers in the world with Dell nipping at its heels and a slew of others aspiring to be number three, has rolled out a new dense hybrid system that can pack twice as many GPU accelerators into a chassis as its predecessor, along with companion Lustre appliances that will also be able to run object storage from a number of vendors.

The Apollo 6500 hybrid servers are the follow-ons to the ProLiant SL6000 "scalable systems" product line, which originally debuted back in June 2009 to compete against the custom machines sold by Dell's Data Center Solutions (DCS) division. The SL6500s, dense machines designed explicitly to hang lots of GPU accelerators off Xeon CPUs, rolled out shortly after that and were last updated in November 2012. With the SL270s Gen8 node that HPE offered at the time, its densest compute element, a 4U SL6500 enclosure could hold two half-width server sleds, each with two Xeon E5 processors and up to eight single-wide Tesla M2070Q, M2075, M2090, or K10 GPU coprocessor cards rated at no more than 225 watts each.
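As a quick sanity check on those density figures, here is a minimal back-of-envelope sketch. Only the chassis numbers (4U enclosure, two sleds, eight GPUs per sled, 225 watt cards) come from the article; the 42U rack used to extrapolate rack-level density is an assumption for illustration.

```python
# Back-of-envelope density math for the SL6500 configuration described above.
# Figures from the article; the 42U rack is an assumed, hypothetical value.

CHASSIS_U = 4            # SL6500 enclosure height in rack units (from article)
SLEDS_PER_CHASSIS = 2    # half-width SL270s Gen8 server sleds (from article)
GPUS_PER_SLED = 8        # single-wide Tesla cards per sled (from article)
GPU_WATTS = 225          # stated per-card power ceiling (from article)

gpus_per_chassis = SLEDS_PER_CHASSIS * GPUS_PER_SLED       # 16 GPUs per 4U box
gpus_per_u = gpus_per_chassis / CHASSIS_U                  # 4 GPUs per rack unit
accelerator_watts = gpus_per_chassis * GPU_WATTS           # worst-case GPU draw

RACK_U = 42              # assumed standard rack height
gpus_per_rack = (RACK_U // CHASSIS_U) * gpus_per_chassis   # full chassis only

print(gpus_per_chassis, gpus_per_u, accelerator_watts, gpus_per_rack)
```

So a fully loaded SL6500 presented 16 GPUs in 4U, or roughly 3.6 kW of accelerator power per chassis before counting the CPUs, which is the baseline the Apollo 6500's doubled density is measured against.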
