ARM on Monday announced a new chip design targeting high-performance computing -- an update to its ARMv8-A architecture, known as the Scalable Vector Extension (SVE). The new design significantly extends the vector processing capabilities associated with AArch64 (64-bit) execution, allowing CPU designers to choose the most appropriate vector length for their application and market, from 128 to 2048 bits. SVE will also allow advanced vectorizing compilers to extract more fine-grain parallelism from existing code. "Immense amounts of data are being collected today in areas such as meteorology, geology, astronomy, quantum physics, fluid dynamics, and pharmaceutical research," ARM fellow Nigel Stephens wrote. HPC systems over the next five to 10 years will shoot for exascale computing, he continued.
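The key idea behind SVE is that code is "vector-length agnostic": the same loop runs correctly whether the hardware implements 128-bit or 2048-bit vectors, with a predicate (mask) covering the partial final strip. The sketch below is not ARM's API — it is a plain-Python illustration of that strip-mining pattern, with the vector length `vl` as a runtime parameter; the function name is illustrative only.

```python
# Illustrative sketch (not ARM's intrinsics API): emulate a vector-length-
# agnostic loop by processing an array in strips of a runtime length `vl`.

def vla_add(a, b, vl):
    """Add two equal-length sequences in strips of `vl` elements, the way an
    SVE loop processes whatever vector length the CPU provides. A
    whilelt-style predicate limits the active lanes in the final strip,
    so no scalar remainder loop is needed."""
    n = len(a)
    out = [0] * n
    i = 0
    while i < n:
        active = min(vl, n - i)  # predicate: lanes with index < n are active
        for lane in range(active):
            out[i + lane] = a[i + lane] + b[i + lane]
        i += vl
    return out

# The same code is correct for any "hardware" vector length:
a = list(range(10))
b = [1] * 10
assert vla_add(a, b, 4) == vla_add(a, b, 16)
```

Because correctness does not depend on `vl`, a single binary can run unmodified across CPUs with different vector widths — the property that lets designers pick a width per market, as the announcement describes.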
Last week, Tencent Graph Computing (TGraph) officially announced the release of Plato, an open source high-performance graph computing framework that meets the requirements of ultra-large-scale graphs with billions of nodes. Plato can shorten algorithm computing time from days to just minutes and reduce the number of servers required to complete a task from hundreds to only about ten -- feats that, according to Tencent, no other mainstream distributed graph computing framework can match. Graph computing combines data from different sources and of different kinds into the same graph to find correlations and connections that are difficult to discern when each data source is analyzed separately. Graph computing is increasingly used as a data analysis and mining tool across social networks and recommendation systems, as well as in the cyber security, text retrieval and biomedical fields. Plato can provide efficient offline graph computing and graph representation learning for social network data on the massive scale produced by Tencent.
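Plato's own API is not shown in the announcement; the sketch below is a generic illustration of the kind of iterative computation such frameworks parallelize across servers — PageRank over an adjacency list, here in plain single-machine Python. All names are illustrative assumptions, not Plato code.

```python
# Minimal PageRank sketch: the per-iteration "scatter rank along edges"
# step is what distributed frameworks like Plato partition across machines.

def pagerank(edges, n, damping=0.85, iters=50):
    """edges: list of (src, dst) pairs; n: number of nodes.
    Returns a list of rank scores summing to 1."""
    out_deg = [0] * n
    for s, _ in edges:
        out_deg[s] += 1
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - damping) / n] * n
        for s, d in edges:
            new[d] += damping * rank[s] / out_deg[s]
        # dangling nodes (no out-edges) redistribute their rank uniformly
        dangling = sum(rank[i] for i in range(n) if out_deg[i] == 0)
        rank = [r + damping * dangling / n for r in new]
    return rank

# Tiny example: node 2 is linked by both other nodes, so it ranks highest.
r = pagerank([(0, 2), (1, 2), (2, 0)], n=3)
```

At billion-node scale the edge list no longer fits on one machine, which is where a distributed framework's graph partitioning and message passing take over from this naive loop.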
In this webinar, Martijn de Vries, CTO at Bright Computing, and Robert Stober, Director of Product Management at Bright Computing, discuss the convergence of HPC and AI in the context of current industry trends and the practices organizations are using. They will discuss and demonstrate the convergence of HPC and AI on a shared infrastructure, using Bright auto-scaler to enable efficient use of compute resources based on workload demand and policies, and also cover how to extend HPC/AI infrastructure to edge locations. Don't miss out on this opportunity to gain valuable insight into innovative ways HPC and AI are being used together today.
Moore's Law posits that the number of transistors on a microprocessor -- and therefore its computing power -- will double every two years. It has held true since Gordon Moore came up with it in 1965, but its imminent end has been predicted for years. As long ago as 2000, the MIT Technology Review raised a warning about the limits of how small and fast silicon technology can get. The thing is, Moore's Law isn't really a law. Moore didn't describe an immutable truth, like gravity or the conservation of momentum.
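The doubling claim is easy to state as arithmetic: starting from some transistor count n0 (an assumed input, not a historical figure), the count after t years with a two-year doubling period is n0 · 2^(t/2).

```python
# Moore's Law as arithmetic: projected transistor count after `years`,
# assuming a starting count n0 and a two-year doubling period.
def moores_law(n0, years, period=2):
    return n0 * 2 ** (years / period)

# Ten doublings over 20 years is roughly a 1000x increase:
assert moores_law(1, 20) == 2 ** 10  # 1024
```

That relentless exponential is exactly why even a modest slowdown in the doubling period compounds into a large gap over a decade.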