Results


Hello 2017, and Recap of Top 10 Posts of 2016

#artificialintelligence

As we kick off what will surely be another very exciting year of progress in artificial intelligence, machine learning and data science, we start with a quick recap of our "Top 10" most popular posts (based on aggregate readership) from the year just concluded. We also show how Microsoft R Server can harness the deep learning capabilities of MXNet and Azure GPUs using simple R scripts. Few things in life can beat "free", and that was certainly true about our free eBook on creating intelligent apps using SQL Server and R. You can now embed intelligent analytics and data transformations right in your database, and make transactions intelligent in real time. We also announced that, on Windows, Microsoft R Server (MRS) would be included in SQL Server 2016.
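
The MXNet example in that post is driven from R through Microsoft R Server; as a rough, non-authoritative analogue of the same GPU-offload idea, here is a minimal sketch using MXNet's Python API (the device index and array sizes are illustrative assumptions, not the post's actual script):

    # Rough Python analogue of the post's idea: push a deep-learning-style
    # matrix operation onto a GPU through MXNet. (The original post drives
    # MXNet from R via Microsoft R Server; this sketch only illustrates the
    # same GPU-offload concept.)
    import mxnet as mx

    ctx = mx.gpu(0)   # assumes at least one CUDA GPU is visible; use mx.cpu() otherwise
    a = mx.nd.random.uniform(shape=(1024, 1024), ctx=ctx)
    b = mx.nd.random.uniform(shape=(1024, 1024), ctx=ctx)
    c = mx.nd.dot(a, b)            # the multiply executes on the chosen device
    print(c.shape)                 # (1024, 1024)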


Intel To Duke It Out With Nvidia In The Coprocessor Market

Forbes

In our previous analysis, we discussed how Intel is competing with Nvidia in the data center coprocessor market. GPUs have a parallel architecture with hundreds of cores, making them well suited for the matrix and vector operations at the heart of both deep learning and 3D computer graphics; these computational capabilities make them ideal coprocessors in high-performance computing environments. Currently, it is debatable which of the two – Intel's Xeon Phi processor family (code-named Knights Landing) or Nvidia's Tesla processors – offers better performance.
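
The parallelism point can be made concrete: in a matrix multiply, every output element is an independent dot product, which is exactly the kind of work a GPU can spread across hundreds of cores. A minimal Python/NumPy sketch (sizes chosen purely for illustration):

    # Why matrix multiplication maps well onto many GPU cores: every output
    # element C[i, j] depends only on row i of A and column j of B, so all
    # m*n elements can be computed independently and in parallel.
    import numpy as np

    m, k, n = 4, 3, 5                     # small sizes, for illustration only
    A = np.random.rand(m, k)
    B = np.random.rand(k, n)

    # Element-wise formulation: each (i, j) is an independent dot product.
    C_independent = np.empty((m, n))
    for i in range(m):
        for j in range(n):
            C_independent[i, j] = A[i, :] @ B[:, j]

    # A GPU schedules these independent dot products across its cores;
    # the result matches the library routine.
    assert np.allclose(C_independent, A @ B)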


NVIDIA Tesla P100 Available on Google Cloud Platform | NVIDIA Blog

#artificialintelligence

NVIDIA Tesla P100 and Tesla K80 GPUs will be available on Google Cloud Platform starting early next year. Tesla P100 GPUs will be offered to Google Compute Engine and Google Cloud Machine Learning users around the world. The Tesla K80 GPU accelerator delivers exceptional performance, with increased throughput that allows researchers to advance their scientific discoveries and developers to boost their web services. Learn more about NVIDIA GPU cloud computing and read Google's announcement.


Microsoft Azure networking is speeding up, thanks to custom hardware

PCWorld

Microsoft's field-programmable gate arrays (FPGAs) have been put to use in a variety of first-party Microsoft services, and they're now starting to accelerate networking on the company's Azure cloud platform. In addition to improving networking speeds, the FPGAs (which sit on custom, Microsoft-designed boards connected to Azure servers) can also be used to improve the speed of machine-learning tasks and other key cloud functionality. Azure CTO Mark Russinovich said using the FPGAs was key to helping Azure take advantage of the networking hardware it put into its data centers. In the future, the Accelerated Networking service will expand to Azure's other virtual machine types and operating systems.


Nvidia releases Pascal GPUs for neural networks

ZDNet

Compared with an Intel Xeon E5-2690v4, which was launched earlier this year, Nvidia claimed its offering is 40 times more power efficient while being 45 times faster to respond. "A single server with a single Tesla P4 replaces 13 CPU-only servers for video inferencing workloads, delivering over 8x savings in total cost of ownership, including server and power costs," the company boasted. The article also quotes a pro-Xeon counterpoint: "Most customers will tell you that a GPU becomes a one-off environment that they need to code and program against, whereas they are running millions of Xeons in their datacentre, and the more they can use a single instruction set, single operating system, single operating environment for all of their workloads, the better the performance and the lower the total cost of operation," she said. Nvidia today also announced that its Drive PX 2 platform is set to be deployed by Baidu in its self-driving car.
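
As a back-of-the-envelope illustration of how such a total-cost-of-ownership claim is computed, here is a small Python sketch comparing 13 CPU-only servers with a single GPU-accelerated server; every price, wattage and electricity rate in it is a hypothetical placeholder, not a figure from Nvidia or ZDNet:

    # Hypothetical TCO comparison: 13 CPU-only servers versus 1 GPU-accelerated
    # server delivering the same inferencing throughput. All prices, wattages
    # and rates below are made-up placeholders for illustration only.

    def tco(num_servers, server_cost, watts, years=3,
            usd_per_kwh=0.10, hours_per_year=8760):
        """Capital cost plus electricity over the assumed service life."""
        energy_kwh = num_servers * watts / 1000.0 * hours_per_year * years
        return num_servers * server_cost + energy_kwh * usd_per_kwh

    cpu_fleet = tco(num_servers=13, server_cost=5000, watts=400)   # CPU-only servers
    gpu_server = tco(num_servers=1, server_cost=8000, watts=500)   # server + GPU card

    print(f"CPU fleet TCO:  ${cpu_fleet:,.0f}")
    print(f"GPU server TCO: ${gpu_server:,.0f}")
    print(f"Savings factor: {cpu_fleet / gpu_server:.1f}x")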


IBM's new servers to propel AI, Deep Learning & Advanced Analytics

#artificialintelligence

Featuring a new chip, the three Linux-based servers incorporate innovations from the OpenPOWER community and are part of the Power Systems LC lineup, which IBM claims delivers higher levels of performance and greater computing efficiency than x86-based servers. The servers are said to have been co-developed with global technology companies, and the new Power Systems are designed to propel artificial intelligence, deep learning, high performance data analytics and other compute-heavy workloads, helping businesses and cloud service providers cut data center costs. Big Blue states that the new IBM Power System S822LC for High Performance Computing server has been developed through open collaboration. "The open and collaborative model of the OpenPOWER Foundation has propelled system innovation forward in a major way with the launch of the IBM Power System S822LC for High Performance Computing," said Ian Buck, VP of Accelerated Computing at NVIDIA. "NVIDIA NVLink provides tight integration between the POWER CPU and NVIDIA Pascal GPUs and improved GPU-to-GPU link bandwidth to accelerate time to insight for many of today's most critical applications like advanced analytics, deep learning and AI."


IBM Linux Servers Designed to Accelerate Artificial Intelligence, Deep Learning and Advanced Analytics

#artificialintelligence

Collaboratively developed with some of the world's leading technology companies, the new Power Systems are designed to propel artificial intelligence, deep learning, high performance data analytics and other compute-heavy workloads, which can help businesses and cloud service providers save money on data center costs. The release quotes NVIDIA's Ian Buck: "NVIDIA NVLink provides tight integration between the POWER CPU and NVIDIA Pascal GPUs and improved GPU-to-GPU link bandwidth to accelerate time to insight for many of today's most critical applications like advanced analytics, deep learning and AI." Among those first in line to receive shipments are a large multinational retail corporation and the U.S. Department of Energy's Oak Ridge National Laboratory (ORNL) and Lawrence Livermore National Laboratory (LLNL). Under the heading "Lower Costs, Less Server Sprawl," IBM adds that the Power LC servers are fully compatible with Linux-based cloud environments, optimized for data-rich applications, and can deliver superior data center efficiency.


Cognitive Technologies and Automated Analytics: Is There a Difference?

#artificialintelligence

For identical reasons, cognitive technology has made a huge splash in business -- it just topped the International Institute for Analytics (IIA) list of top 2016 analytics trends, and is expected to "subsume" automated analytics. A more useful definition comes from Deloitte, which describes AI as "computer systems able to perform tasks that normally require human intelligence." Very simply stated, deep learning allows computers to process many layers (10 or more) of complex data sets simultaneously. In fact, as IIA co-founder Tom Davenport notes (via TechTarget), analytics professionals have long used deep learning, neural networks, and related technologies such as logistic regression in their products.
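
To make "many layers" concrete, here is a minimal NumPy sketch of a forward pass through ten stacked layers; the width, activation and random weights are arbitrary illustrative choices, not anything from the article:

    # Minimal sketch of "many layers": a forward pass through a 10-layer
    # fully connected network. Width, activation and random weights are
    # arbitrary illustrative choices.
    import numpy as np

    rng = np.random.default_rng(0)
    width, depth = 32, 10                          # 10 stacked layers
    layers = [rng.standard_normal((width, width)) * 0.1 for _ in range(depth)]

    x = rng.standard_normal(width)                 # one input example
    for W in layers:
        x = np.maximum(0.0, W @ x)                 # linear transform + ReLU, layer by layer
    print(x.shape)                                 # still a width-sized vector after 10 layers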