

Google's new AI supercomputer is 'a unique approach to AI development', claims expert

#artificialintelligence

Google recently announced it has developed a unique artificial intelligence (AI) supercomputer that is faster, more efficient, and more powerful than Nvidia's systems. Nvidia is the reigning champion of AI model training and deployment, dominating over 90% of the market, according to CNBC. The great AI race has been raging in Big Tech for a while now, and Google has been developing AI chips called Tensor Processing Units (TPUs) since 2016. "Google has chosen a unique approach to AI development by creating its own 'Tensor Processing Unit' (TPU) architecture, rather than relying on specialised GPUs [graphics processing units] from Nvidia," explains Matt Falconer, founder of Elo AI. "This decision allows Google to reduce their dependence on third-party vendors and achieve vertical integration across its entire AI stack," Falconer added.


HPE and Cerebras build new AI supercomputer at LRZ in Munich

#artificialintelligence

HPE and Cerebras Systems have built a new AI supercomputer in Munich, Germany, pairing an HPE Superdome Flex with AI accelerator technology from Cerebras for use by the scientific and engineering community. The new system, created for the Leibniz Supercomputing Centre (LRZ) in Munich, is being deployed to meet the current and expected future compute needs of researchers, including larger deep learning neural network models and the emergence of multi-modal problems that involve multiple data types such as images and speech, according to Laura Schulz, LRZ's head of Strategic Developments and Partnerships. "We're seeing an increase in large data volumes coming at us that need more and more processing, and models that are taking months to train; we want to be able to speed that up," Schulz said. "And then we're also seeing multi-modal problems, such as integration of natural language processing (NLP) and medical imaging or documents. So we have this complexity, we have this need for faster, we have this need for bigger that's coming from our user side, from our facility side, and we need to make sure that we're constantly evaluating these different novel architectures, and different usage models, to be able to understand all that." The LRZ team decided that the Cerebras technology, with its large shared memory and scalability, was a good match for the "pain points" they were trying to resolve, she said.


HPE is building a rapid AI supercomputer powered by the world's largest chip

#artificialintelligence

Hewlett Packard Enterprise (HPE) has announced it is building a powerful new AI supercomputer in collaboration with Cerebras Systems, maker of the world's largest chip. The new system will combine HPE Superdome Flex servers with Cerebras CS-2 accelerators, which are powered by the monstrous Wafer-Scale Engine 2 (WSE-2) processor. The as-yet-unnamed supercomputer is expected to go live later this summer at the Leibniz Supercomputing Centre (LRZ) in Bavaria, providing researchers with a new resource to help accelerate research projects on topics ranging from medical imaging to aerospace engineering. Unveiled by Cerebras in April last year, the WSE-2 is designed expressly to accelerate AI training and inference workloads. The chip houses a staggering 2.6 trillion transistors and 850,000 AI cores spread across 46,225 mm² of silicon, supposedly delivering the AI performance of hundreds of GPUs.


What is Meta's New AI Supercomputer?

#artificialintelligence

In June last year, Tesla unveiled its AI supercomputer, at the time the fifth most powerful in the world, built to train self-driving AI. It is being used to train the neural nets powering Tesla's Autopilot ahead of the company's upcoming purpose-built AI supercomputer, Dojo. Six months later, not to be outdone, Meta has announced a similar plan. So why are Big Tech companies working on supercomputers to train AI? Social media conglomerate Meta is investing around $10 billion a year in the Metaverse, and it needs better AI to power that Metaverse dream as well.


Meta says its new AI supercomputer will be the world's fastest by mid-2022

Engadget

Meta has completed the first phase of a new AI supercomputer. Once the AI Research SuperCluster (RSC) is fully built out later this year, the company believes it will be the fastest AI supercomputer on the planet, capable of "performing at nearly 5 exaflops of mixed precision compute." The company says RSC will help researchers develop better AI models that can learn from trillions of examples. Among other things, the models will be able to build better augmented reality tools and "seamlessly analyze text, images and video together," according to Meta. Much of this work is in service of its vision for the metaverse, in which it says AI-powered apps and products will have a key role. "We hope RSC will help us build entirely new AI systems that can, for example, power real-time voice translations to large groups of people, each speaking a different language, so they can seamlessly collaborate on a research project or play an AR game together," technical program manager Kevin Lee and software engineer Shubho Sengupta wrote in a blog post.


IBM's new AI supercomputer can argue, rebut and debate humans

#artificialintelligence

IBM once again gave the world an impressive update on the competition between humans and machines. The company, known for building supercomputers that can defeat chess grandmasters and champion Jeopardy contestants, hosted another Man vs. Machine contest in San Francisco on Monday. A system that IBM calls Project Debater faced off against two humans in two separate debates. The verdict: humans are still ahead, but the gap is closing. Debater won one of the two debates as voted by the audience, but who won was almost beside the point.


Cambridge to get new AI supercomputer

#artificialintelligence

The University of Cambridge is set to receive a new AI supercomputer as part of a £10 million partnership between the Engineering and Physical Sciences Research Council (EPSRC), the Science and Technology Facilities Council (STFC) and the University. The system, which is supported by Cambridge's Research Computing Service, aims to help companies create real business value from the use of advanced computing infrastructure. The supercomputer is part of the UK government's AI Sector Deal, which involves more than 50 leading technology companies and organisations. The deal is worth almost £1 billion, including around £300 million of private sector investment in AI. 'AI research requires supercomputing capacity capable of processing huge amounts of data at very high speeds,' said Dr Paul Calleja, Director of the University's Research Computing Service. 'Cambridge's supercomputer provides researchers with the fast and affordable supercomputing power they need for AI work.' UK Secretary of State for Digital, Culture, Media and Sport Matt Hancock stated: 'The UK must be at the forefront of emerging technologies, pushing boundaries and harnessing innovation to change people's lives for the better.'


Cambridge receives £10 million in funding for new AI supercomputer

#artificialintelligence

The new AI supercomputer is a £10 million partnership between the Engineering and Physical Sciences Research Council (EPSRC), the Science and Technology Facilities Council (STFC) and the University. Capable of tackling the largest scientific and industrial challenges at very high speeds, the supercomputer is supported by Cambridge's Research Computing Service. The aim is to help companies create real business value from advanced computing infrastructure. The supercomputer is part of the UK government's AI Sector Deal, which involves more than 50 leading technology companies and organisations. The deal is worth almost £1 billion, including almost £300 million of private sector investment in AI.