If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Top public cloud vendors want you to store massive data sets on their platforms to run complex Machine Learning algorithms. Apart from offering affordable compute and storage services based on a pay-as-you-go pricing model, they are also luring customers by bringing the latest GPU technology to the cloud. Why the sudden rush to offer GPUs in the cloud? The answer is simple: the rise of Machine Learning. Amazon, Google, IBM, and Microsoft each want to make their cloud the preferred platform for storing, processing, analyzing, and querying data.
Like other major hyperscale web companies, China's Tencent, which operates a massive network of ad, social, business, and media platforms, is increasingly reliant on two trends to keep pace. The first is not surprising: efficient, scalable cloud computing to serve internal and user demand. The second is more recent and spans a wide breadth of deep learning applications, including the company's own internally developed Mariana platform, which powers many user-facing services. When the company introduced its deep learning platform back in 2014 (at a time when companies like Baidu, Google, and others were expanding their GPU counts for speech and image recognition applications), it noted that its main challenges were providing adequate compute power and parallelism for fast model training. "For example," Mariana's creators explain, "the acoustic model of automatic speech recognition for Chinese and English in Tencent WeChat adopts a deep neural network with more than 50 million parameters, more than 15,000 senones (tied triphone model represented by one output node in a DNN output layer) and tens of billions of samples, so it would take years to train this model by a single CPU server or off-the-shelf GPU."
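To see why the Mariana team's "years on a single CPU server" claim is plausible, here is a rough back-of-envelope sketch. All figures below except the parameter and sample counts quoted above are illustrative assumptions (the ~6 FLOPs-per-parameter-per-sample heuristic for a forward plus backward pass, and the sustained CPU throughput), not numbers from Tencent:

```python
# Back-of-envelope training-time estimate for the WeChat acoustic model.
# Assumptions (not from the source): ~6 floating-point operations per
# parameter per training sample (a common heuristic for one forward +
# backward pass), and a CPU server sustaining ~100 GFLOP/s on dense math.

params = 50e6                    # "more than 50 million parameters" (quoted)
samples = 10e9                   # "tens of billions of samples"; lower bound
flops_per_sample = 6 * params    # heuristic cost of one training sample
total_flops = flops_per_sample * samples

cpu_flops_per_sec = 100e9        # assumed sustained single-server throughput
seconds = total_flops / cpu_flops_per_sec
years = seconds / (3600 * 24 * 365)
print(f"~{years:.1f} years for a single pass over the data")
```

Even under these optimistic assumptions, a single epoch takes on the order of a year on one CPU server, and multiple passes over the data push training into the multi-year range the Mariana team describes, which is why parallelism across GPUs was the pressing concern.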
I am really excited to announce that Azure N-Series virtual machines will be generally available on December 1, 2016. Azure N-Series virtual machines are powered by NVIDIA GPUs and give customers and developers access to industry-leading accelerated computing and visualization experiences. I am also excited to announce global access to the sizes, with N-Series available in South Central US, East US, West Europe and South East Asia, all on December 1. We've had thousands of customers participate in the N-Series preview since we launched it back in August. We've heard positive feedback on the enhanced performance and the work we have done with NVIDIA to make this a completely turnkey experience for you.
In a new initiative, UK-based PC systems maker and retailer Scan 3XS is providing remote access to Nvidia DGX-1 Deep Learning Supercomputers. To help customers decide whether the significant investment involved in acquiring a DGX-1 is right for them, Scan has begun a DGX-1 Proof of Concept program that allows end users to run custom data processing tests on one of its own deep learning machines. With such a system "you can immediately shorten data processing time, visualize more data, accelerate deep learning frameworks, and design more sophisticated neural networks," says Nvidia. At its heart, the DGX-1 is built around eight Nvidia Tesla P100 GPU accelerators using Nvidia's newest Pascal architecture.
Super Micro Computer, Inc. (SMCI), a global leader in compute, storage, and networking technologies and green computing, today announced the general availability of its SuperServer solutions optimized for NVIDIA Tesla P100 accelerators with the new Pascal GPU architecture. "The new SuperServers deliver superior energy-efficient performance for compute-intensive data analytics, deep learning and scientific applications while minimizing power consumption." With the convergence of Big Data analytics, the latest GPU architectures, and improved Machine Learning algorithms, Deep Learning applications require the processing power of multiple GPUs that must communicate efficiently and effectively as the GPU network expands. Supermicro (SMCI), the leading innovator in high-performance, high-efficiency server technology, is a premier provider of advanced server Building Block Solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide.