If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Companies running AI applications often need as much computing muscle as researchers who use supercomputers do. IBM's latest system is aimed at both audiences. The company last week introduced its first server powered by the new Power9 processor designed for AI and high-performance computing. The powerful technologies inside have already attracted the likes of Google and the US Department of Energy as customers. The new IBM Power System AC922 is equipped with two Power9 CPUs and from two to six NVIDIA Tesla V100 GPUs.
In a world that requires ever more compute power to handle the resource-intensive demands of workloads like artificial intelligence and machine learning, IBM enters the fray with its latest-generation Power chip, the Power9. The company intends to sell the chips to third-party manufacturers and to cloud vendors including Google. Meanwhile, it is releasing a new computer powered by the Power9 chip, the AC922, and it intends to offer the chips as a service on the IBM Cloud. "We generally take our technology to market as a complete solution," explained Brad McCredie, IBM fellow and vice president of cognitive systems. The company has designed the new chip specifically to improve performance on common AI frameworks like Chainer, TensorFlow and Caffe, and claims speedups of up to nearly 4x for workloads running on those frameworks.
In this special guest feature from Scientific Computing World, David Yip, HPC and Storage Business Development at OCF, provides his take on the place of GPU technology in HPC. There was an interesting story published earlier this week in which NVIDIA's founder and CEO, Jensen Huang, said: 'As advanced parallel-instruction architectures for CPU can be barely worked out by designers, GPUs will soon replace CPUs'. There are only so many processing cores you can fit on a single CPU chip. Some optimized applications do take advantage of multiple cores, but CPUs are typically used for sequential, serial processing (although Intel is doing an excellent job of adding more and more cores to its CPUs and getting developers to program multicore systems). By contrast, a GPU has a massively parallel architecture consisting of many thousands of smaller, more efficient cores designed for handling multiple tasks simultaneously.
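The serial-versus-parallel contrast above can be sketched in a few lines. This is only an illustration (NumPy on a CPU, not an actual GPU kernel), but the shape of the idea is the same: a serial loop touches one element per step, while a data-parallel operation applies the same instruction to every element of a bulk array at once, which is exactly the pattern a GPU's thousands of cores exploit.

```python
import numpy as np

# Serial, CPU-style processing: one element at a time, in sequence.
def scale_serial(xs, a):
    out = []
    for x in xs:
        out.append(a * x)
    return out

# Data-parallel, GPU-style processing: one bulk operation over the
# whole array. NumPy dispatches this as a single vectorized kernel,
# analogous to how a GPU maps it across many cores.
def scale_parallel(xs, a):
    return a * np.asarray(xs, dtype=float)

data = list(range(8))
assert scale_serial(data, 3.0) == list(scale_parallel(data, 3.0))
```

Both produce identical results; the difference is purely in how the work is scheduled, which is why "embarrassingly parallel" workloads like deep learning map so well onto GPUs.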
All the shiny and zippy hardware in the world is meaningless without software, and that software can only go mainstream if it is easy to use. It took Linux two decades to acquire enterprise features and polish, and Windows Server took just as long. So did a raft of open source middleware applications for storing data and interfacing back-end databases and datastores with Web front ends. Now it is the turn of HPC and AI applications, and hopefully it won't take that long. As readers of The Next Platform know full well, HPC applications are not new.
There's a big connection between my love for water sports and hardware design -- both involve observing waves and planning several moves ahead. Four years ago, when we started sketching the POWER9 chip from scratch, we saw an upsurge of modern workloads driven by artificial intelligence and massive data sets. We are now ready to ride this new tide of computing with POWER9. It is a transformational architecture and an evolutionary shift from the archaic ways of computing promoted by x86. POWER9 is loaded with industry-leading new technologies designed for AI to thrive.
IBM is doubling down on AI: releasing new software to help train machine-learning models and talking up the potential for its new Power9 systems to accelerate intelligent software. Today IBM unveiled new software that will make it easier to train machine-learning models to make decisions and extract insights from big data. The Deep Learning Impact software tools will help users develop AI models using popular open-source deep-learning frameworks, such as TensorFlow and Caffe, and will be added to IBM's Spectrum Conductor software from December. Alongside the software reveal, IBM has been talking up new systems based around its new Power9 processor, which are on display at this year's SC17 event. IBM says these systems are tailored towards AI workloads, thanks to their ability to rapidly shuttle data between Power9 CPUs and hardware accelerators, such as GPUs and FPGAs, commonly used both in training and in running machine-learning models.
Lenovo has announced new hardware and software for firms building machine-learning systems, as the Chinese tech giant doubles down on AI. Lenovo expects firms will increasingly rely on AI systems to make rapid decisions based on the vast amount of data being generated, predicting that 44 trillion gigabytes of data will exist by 2020. To serve this fast-growing market, Lenovo today announced new hardware and software for streamlining machine learning on high-performance computing systems. The ThinkSystem SD530, a two-socket server in a 0.5U rack form factor, is now available with the latest NVIDIA GPU accelerators and Intel Xeon Scalable family CPUs. By including the option of adding NVIDIA's Tesla V100 GPU accelerator, Lenovo is giving businesses the ability to massively boost the performance of AI-related tasks.
There are a number of machine learning (ML) architectures that utilize deep neural networks (DNNs), including AlexNet, VGGNet, GoogLeNet, Inception, ResNet, FCN, and U-Net. These in turn run on frameworks like Berkeley's Caffe, Google's TensorFlow, Torch, Microsoft's Cognitive Toolkit (CNTK), and Apache MXNet. Of course, support for these frameworks on specific hardware is required to actually run the ML applications. Each framework has advantages and disadvantages. For example, Caffe is an easy platform to start with, especially since one of its popular uses is image recognition.
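All of the frameworks listed above ultimately compute the same building block: layers that apply a weighted sum followed by a nonlinearity. The sketch below shows a single dense layer with ReLU in plain NumPy, purely as an illustration of the shared forward-pass arithmetic; the shapes and random weights are assumptions for the example, and real frameworks add automatic differentiation, GPU dispatch, and many layer types on top of this.

```python
import numpy as np

# One dense layer with ReLU activation: y = max(0, Wx + b).
# DNN frameworks (Caffe, TensorFlow, CNTK, MXNet, ...) chain many
# such layers and differentiate through them automatically.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # weights: 3 inputs -> 4 outputs (illustrative)
b = np.zeros(4)                   # biases
x = rng.standard_normal(3)        # one input sample

y = np.maximum(0.0, W @ x + b)    # forward pass of the layer
assert y.shape == (4,) and (y >= 0).all()
```

The same computation expressed as a graph node is what a framework schedules onto a CPU, GPU, or FPGA, which is why hardware support for the framework, not just the model, determines where an ML application can run.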
In a blog post today, Intel (NASDAQ:INTC) CEO Brian Krzanich announced the Nervana Neural Network Processor (NNP). The Intel Nervana NNP promises to revolutionize AI computing across myriad industries. Using Intel Nervana technology, companies will be able to develop entirely new classes of AI applications that maximize the amount of data processed and enable customers to find greater insights – transforming their businesses... We have multiple generations of Intel Nervana NNP products in the pipeline that will deliver higher performance and enable new levels of scalability for AI models. This puts us on track to exceed the goal we set last year of achieving 100 times greater AI performance by 2020.
September 28, 2017 -- Cirrascale Cloud Services, a premier provider of multi-GPU deep learning cloud solutions, today announced it will begin offering NVIDIA Tesla V100 GPU accelerators as part of its dedicated, multi-GPU deep learning cloud service offerings. The Tesla V100 specifications are impressive with 16GB of HBM2 stacked memory, 5,120 CUDA cores and 640 Tensor Cores, providing 7.8 TFlops double-precision performance, 15.7 TFlops single-precision performance, and 125 TFlops mixed-precision deep learning performance. "Deploying the new NVIDIA Tesla V100 GPU accelerators within the Cirrascale Cloud Services platform will enable their customers to accelerate deep learning and HPC applications using the world's most advanced data center GPUs." To learn more about Cirrascale Cloud Services and its unique dedicated, multi-GPU cloud solutions, please visit http://www.cirrascale.cloud.
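The quoted V100 throughput figures can be sanity-checked with simple arithmetic. The boost clock of roughly 1.53 GHz and the per-cycle operation counts below are assumptions drawn from NVIDIA's published Volta figures, not from the announcement itself:

```python
# Sanity-check the Tesla V100 peak-throughput figures from the specs above.
# Assumptions (not in the article): ~1.53 GHz boost clock; 2 FLOPs per CUDA
# core per cycle (one fused multiply-add); each Tensor Core does a 4x4x4
# matrix multiply-accumulate = 64 FMAs = 128 FLOPs per cycle.
cuda_cores = 5120
tensor_cores = 640
clock_hz = 1.53e9  # assumed boost clock

fp32_tflops = cuda_cores * 2 * clock_hz / 1e12      # single precision, ~15.7
fp64_tflops = fp32_tflops / 2                       # 1:2 FP64 ratio, ~7.8
tensor_tflops = tensor_cores * 128 * clock_hz / 1e12  # mixed precision, ~125

print(round(fp32_tflops, 1), round(fp64_tflops, 1), round(tensor_tflops, 1))
```

The results land on 15.7, 7.8, and roughly 125 TFlops respectively, matching the announcement's numbers.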