Results


Google's Second AI Chip Crashes Nvidia's Party

#artificialintelligence

On Wednesday at its annual developer conference, Google announced the second generation of its custom chip, the Tensor Processing Unit, optimized to run its deep learning algorithms. Nvidia, for its part, says its latest-generation data center GPU, the Tesla V100, delivers 120 teraflops of performance. Through Google Cloud, anybody can rent Cloud TPUs, much as they can already rent GPUs there. "Google's use of TPUs for training is probably fine for a few workloads for the here and now, but given the rapid change in machine learning frameworks, sophistication, and depth, I believe Google is still doing much of their machine learning production and research training on GPUs," said tech analyst Patrick Moorhead.
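
For readers curious about the rental route the article mentions, the sketch below shows roughly how a TensorFlow training job attaches to a provisioned Cloud TPU. The resolver argument, the toy model, and the exact calls (which have shifted across TensorFlow releases) are illustrative assumptions, not details from the article.

```python
# A minimal sketch of attaching a Keras training job to a rented Cloud TPU.
# The tpu="" argument assumes a TPU VM with an attached TPU; a TPU name or
# grpc:// address can be passed instead. Treat all names here as placeholders.
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# Replicate the model across the TPU cores, much as one would replicate
# across rented cloud GPUs with a multi-GPU strategy.
strategy = tf.distribute.TPUStrategy(resolver)
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )
# model.fit(...) would then run the training steps on the TPU cores.
```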


IBM Bridge and Tunnel Investor

#artificialintelligence

The new systems tap Nvidia's NVLink technology to move data five times faster than any competing platform, Stefanie Chiras, an IBM vice president, said in an interview with VentureBeat. Collaboratively developed with a variety of tech companies, the new Power Systems target AI, deep learning, high-performance data analytics, and other compute-heavy workloads, which can help businesses and cloud service providers save money on data center costs. "The open and collaborative model of the OpenPower Foundation has propelled system innovation forward in a major way with the launch of the IBM Power System S822LC for high-performance computing," said Ian Buck, vice president of accelerated computing at Nvidia, in a statement. The two additional LC servers available today -- the IBM Power System S821LC and the IBM Power System S822LC for Big Data -- can also leverage GPU acceleration technology to increase system performance on a variety of accelerated applications.


GPU-Accelerated Microsoft Azure

#artificialintelligence

Powered by NVIDIA Tesla P100 GPUs and NVIDIA's NVLink high-speed multi-GPU interconnect technology, the HGX-1 arrives as AI workloads -- from autonomous driving and personalized healthcare to superhuman voice recognition -- take off in the cloud. Each chassis holds eight Tesla P100 GPUs and features an innovative switching design, based on NVIDIA NVLink interconnect technology and the PCIe standard, that enables a CPU to dynamically connect to any number of GPUs. This allows cloud service providers that standardize on a single HGX-1 infrastructure to offer customers a range of CPU and GPU machine instance configurations for virtually any workload. The HGX-1 Hyperscale GPU Accelerator reference design is highly modular, allowing it to be configured in a variety of ways to optimize performance for different workloads.
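
The practical consequence of that switching design is that which GPUs a host CPU can reach, and which GPU pairs can exchange data directly, becomes a property of the configured topology rather than of a fixed board layout. As a rough illustration, the PyTorch sketch below enumerates visible GPUs and checks peer-to-peer access on any multi-GPU host; it is generic and not HGX-1-specific code.

```python
# A minimal sketch of inspecting a multi-GPU topology from PyTorch.
# Generic to any CUDA host; none of this is taken from the article.
import torch

def print_gpu_topology() -> None:
    n = torch.cuda.device_count()
    print(f"{n} visible GPU(s)")
    for i in range(n):
        props = torch.cuda.get_device_properties(i)
        print(f"  GPU {i}: {props.name}, {props.total_memory // 2**20} MiB")
    # Peer access indicates whether two GPUs share an NVLink/PCIe path that
    # allows direct memory access without bouncing through host memory.
    for i in range(n):
        for j in range(n):
            if i != j:
                ok = torch.cuda.can_device_access_peer(i, j)
                print(f"  GPU {i} -> GPU {j}: peer access {'yes' if ok else 'no'}")

if __name__ == "__main__":
    if torch.cuda.is_available():
        print_gpu_topology()
    else:
        print("No CUDA device visible")
```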


NVIDIA and Microsoft Boost AI Cloud Computing with Launch of Industry-Standard Hyperscale GPU Accelerator

#artificialintelligence

Providing hyperscale data centers with a fast, flexible path for AI, the new HGX-1 hyperscale GPU accelerator is an open-source design released in conjunction with Microsoft's Project Olympus. It will enable cloud-service providers to easily adopt NVIDIA GPUs to meet surging demand for AI computing. NVIDIA is also joining the Open Compute Project to help drive AI and innovation in the data center.


Nvidia's New TX2 Board Does Dual 4K-Camera Object-Detection in Real Time Make:

#artificialintelligence

Nvidia has led much of this with its TK1 and TX1 modules; now, with the release of the Jetson TX2, the AI capabilities we have access to have just doubled. Nvidia is focusing the high-performance, low-power board on "AI at the edge" -- building artificial intelligence processing capabilities into products directly, rather than sending data to cloud-based supercomputers. For the release event, Nvidia brought 18 demos from various groups to show how they're using the TX architecture, including an onstage demo from Cisco of its new Spark Board collaboration tool. A large flatscreen display, it connected the Cisco rep with his companion Cisco team in Norway, who then showed a new camera device for the Spark Board that automatically recognizes and labels the names of the people in a conference, and automatically crops in on smaller groups to eliminate the "large empty conference room" aesthetic we've become accustomed to with teleconferencing.
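
To make the "AI at the edge" idea concrete, the sketch below runs real-time object detection on a Jetson board, assuming NVIDIA's open-source jetson-inference Python bindings are installed. The model name, camera URI, and API calls come from that project rather than from the article, so treat them as illustrative.

```python
# A minimal sketch of on-device object detection, assuming the
# jetson-inference Python bindings are installed on the Jetson.
import jetson.inference
import jetson.utils

# Load a pretrained SSD-MobileNet detector (weights download on first run).
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

# Open the on-board CSI camera; a USB camera would use e.g. "/dev/video0".
camera = jetson.utils.videoSource("csi://0")
display = jetson.utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()          # grab a frame into GPU memory
    detections = net.Detect(img)    # run inference and overlay boxes
    for d in detections:
        print(net.GetClassDesc(d.ClassID), f"{d.Confidence:.2f}")
    display.Render(img)
```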


Microsoft And NVIDIA Announce HGX-1 Platform Standard For AI/ML Cloud Computing

Forbes

NVIDIA, Microsoft, and Ingrasys (a subsidiary of Foxconn) announced today their plans for HGX-1, a hyperscale GPU accelerator for AI and cloud computing. This open-source design is being released in conjunction with Microsoft's Project Olympus initiative at the OCP (Open Compute Project) conference and is meant to give hyperscale datacenters a high-performance, flexible path for the machine learning industry. The compute model is changing, and machine learning training currently favors GPU computing--NVIDIA's bread and butter. NVIDIA is comparing the HGX-1, and what it hopes the design will do for cloud-based AI workloads, to what ATX (Advanced Technology eXtended) accomplished for PC motherboards back in 1995.


Press 1 to Learn How AI Could Fix Call Centers NVIDIA Blog

#artificialintelligence

The company, Deepgram, created technology that businesses can use to quickly assess customer calls in order to improve service. Businesses that increase the number of customer calls resolved on the first try by just 1 percent save an average of $276,000 a year, according to a study by SQM Group, a customer service consulting firm. Deepgram's deep learning software allows businesses to check the quality of service on every call. One Deepgram customer increased revenue by 3 percent by using the technology, according to the company.
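
As a rough sketch of what "checking every call" looks like in practice, the snippet below sends a recorded call to a speech-to-text API and pulls out the transcript that downstream quality checks would run over. The endpoint, headers, and response shape follow Deepgram's current public REST API as an assumption on this page, not something described in the article; the API key and file name are placeholders.

```python
# A hedged sketch of transcribing a recorded call for quality review.
# Endpoint and response layout are assumptions based on Deepgram's public
# REST API; DEEPGRAM_API_KEY and call.wav are placeholders.
import os
import requests

API_URL = "https://api.deepgram.com/v1/listen"

def transcribe_call(path: str) -> str:
    with open(path, "rb") as audio:
        resp = requests.post(
            API_URL,
            headers={
                "Authorization": f"Token {os.environ['DEEPGRAM_API_KEY']}",
                "Content-Type": "audio/wav",
            },
            data=audio,
        )
    resp.raise_for_status()
    body = resp.json()
    # One transcript per audio channel; a stereo call recording typically
    # puts the agent and the customer on separate channels.
    return body["results"]["channels"][0]["alternatives"][0]["transcript"]

if __name__ == "__main__":
    text = transcribe_call("call.wav")
    # Downstream checks (greetings, hold time, resolution phrases) would
    # then run over the transcript text.
    print(text)
```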


This Data Center is Designed for Deep Learning

#artificialintelligence

The explosion of deep learning software development (deep learning is the most widespread machine learning technique) is driving a growing need for specialized computing infrastructure geared to the workloads required to train neural nets. Cirrascale's data center is designed to provide power densities north of 30 kW per rack; power density in an ordinary enterprise data center is 3 to 5 kW per rack and rarely exceeds 10 kW. Hyperscale cloud operators like Google and Facebook are applying deep learning in many of their user-facing features, but most companies working in the field are still in the development stage. That is true for the majority of Cirrascale's cloud customers, who are writing algorithms and learning to scale their deep learning applications to handle larger data sets. Instead of using still images to train neural nets to identify objects -- the dominant approach -- twentybn uses video.


Nvidia, Baidu partner to develop AI powered autonomous vehicle platform ZDNet

AITopics Original Links

Nvidia CEO Jen-Hsun Huang said the partnership illustrates the commitment both companies have made to advancing the use-cases of AI. The partnership combines Nvidia's self-driving computing platform with Baidu's cloud and mapping technology to develop an algorithm-based operating system capable of powering complex navigation systems in autonomous vehicles. "Baidu has already built a strong team in Silicon Valley to develop autonomous driving technologies, and being able to do road tests will greatly accelerate our progress," said Wang Jing, general manager of Baidu's Autonomous Driving Unit, in a statement.


NVIDIA - THE NEW FACE OF COMPUTING

#artificialintelligence

But behind the scenes of smartphones, virtual and mixed reality devices, PCs, and cloud computing, GPU (graphics processing unit) technologies have become the new face of computing, one that is going to transform computer technology as we know it today. Today, NVIDIA's GPUs are accelerating the development of driverless cars, AI assistants, and devices ranging from IoT hardware to powerful machines such as the Titan Cray supercomputer. Through its innovative technologies, NVIDIA has helped make modern computing what it is today, and the company keeps innovating with blazing-fast smartphone processors, car automation systems, and more powerful GPUs. There is no longer any doubt that at the heart of every modern computing innovation, from personal devices to cloud computing, artificial intelligence, and supercomputers, GPUs are accelerating the pace of technology.