Results


The Audi A8: the World's First Production Car to Achieve Level 3 Autonomy

#artificialintelligence

The 2018 Audi A8, just unveiled in Barcelona, is the world's first production car to offer Level 3 autonomy. The system operates only at speeds up to 60 kilometers per hour (37 mph), which is why Audi calls the feature AI Traffic Jam Pilot. When the car up ahead stops, the A8's AI hits the brakes in time to avoid rear-ending it. Audi said in a statement that it will follow "a step-by-step approach" to introducing the traffic jam pilot.


NVIDIA and Microsoft Boost AI Cloud Computing with Launch of Industry-Standard Hyperscale GPU Accelerator

#artificialintelligence

Providing hyperscale data centers with a fast, flexible path for AI, the new HGX-1 hyperscale GPU accelerator is an open-source design released in conjunction with Microsoft's Project Olympus. It will enable cloud-service providers to easily adopt NVIDIA GPUs to meet surging demand for AI computing. NVIDIA is also joining the Open Compute Project to help drive AI and innovation in the data center.


Facebook, Microsoft target faster services with new AI server designs

PCWorld

Both companies introduced new open-source hardware designs aimed at delivering faster responses from artificial intelligence services, and the designs will allow the companies to offer more services via their networks and software. Facebook's Big Basin system has eight Nvidia Tesla P100 GPU accelerators, connected in a mesh architecture via the super-fast NVLink interconnect, and delivers on the promise of decoupling processing, storage, and networking units in data centers. Microsoft's new server design has a universal motherboard slot that will support the latest server chips, including Intel's Skylake and AMD's Naples.
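
To make the eight-GPU design concrete, here is a minimal data-parallel training sketch; it assumes PyTorch, which neither company's announcement specifies, and uses a toy model standing in for a real workload:

```python
import torch
import torch.nn as nn

# A toy model standing in for a real vision or speech network.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 10),
)

if torch.cuda.is_available():
    # DataParallel splits each input batch across the visible GPUs
    # (e.g. the eight Tesla P100s in a Big Basin-class system) and
    # gathers the outputs; gradient traffic crosses the NVLink mesh.
    model = nn.DataParallel(model.cuda(),
                            device_ids=list(range(torch.cuda.device_count())))

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One illustrative training step on a random batch.
inputs = torch.randn(256, 1024)
targets = torch.randint(0, 10, (256,))
if torch.cuda.is_available():
    inputs, targets = inputs.cuda(), targets.cuda()

optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```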


With Big Basin, Facebook Beefs Up its AI Hardware

#artificialintelligence

It's a beefier successor to Big Sur, the first-generation Facebook AI server unveiled last July. "With Big Basin, we can train machine learning models that are 30 percent larger because of the availability of greater arithmetic throughput and a memory increase from 12 GB to 16 GB," said Kevin Lee, a technical program manager at Facebook. With this hardware, Facebook can train its machine learning systems to recognize speech, understand the content of video and images, and translate content from one language to another. Facebook has been designing its own hardware for many years, and in preparing to upgrade Big Sur, the Facebook engineering team gathered feedback from colleagues in Applied Machine Learning (AML), Facebook AI Research (FAIR), and infrastructure teams.


Nvidia's Jetson TX2 makes AI computing possible within cameras, sensors and more

#artificialintelligence

Nvidia has a new generation of its Jetson embedded computing platform for devices at the edge of a network, including things like traffic cameras, manufacturing robots, smart sensors and more. It pushes edge-of-network computing even further, allowing distributed neural networks to run right on edge devices so they can more accurately identify objects in images, recognize speech or interpret surroundings for autonomous navigation. Cisco says it can use the Jetson TX2 to add local AI-powered features, including face and speech recognition, to its Spark enterprise network devices, which could potentially offer a lot of advantages in terms of security and authentication. Nvidia's shipping TX2 module will retail for $399 when it arrives in Q2, and the existing TX1 and TK1 Jetson embedded computing platforms will also continue to be available, at reduced prices.
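
As a rough sketch of what running a neural network right on an edge device looks like in code, the following assumes PyTorch and torchvision are installed on the device (the article names no framework) and uses a small pretrained classifier; the same script runs on a Jetson-class board or a desktop GPU:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Use the device's CUDA-capable GPU if present, else fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A small pretrained classifier suited to embedded inference.
model = models.squeezenet1_1(pretrained=True).to(device).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "frame.jpg" is a placeholder for a camera frame grabbed on the device.
image = preprocess(Image.open("frame.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    logits = model(image)
print("predicted class index:", logits.argmax(dim=1).item())
```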


DEEP LEARNING PLATFORMS & GPUS: AN INTERVIEW WITH BRYAN CATANZARO

#artificialintelligence

Training and deploying state-of-the-art deep neural networks is very computationally intensive, and while modern GPUs offer high-density computation, researchers need more than a fast processor -- they also need optimized libraries and programming tools so that they can experiment with new ideas efficiently. Bryan Catanzaro, VP of Applied Deep Learning Research at NVIDIA, joined us at the 2017 Deep Learning Summit in San Francisco to share his expertise on GPUs and platforms for deep learning, as well as insights into the latest deep learning developments at NVIDIA. Modern AI applications really gained a foothold at internet companies like Google, Baidu, and Facebook because they have hundreds of millions of users generating enormous amounts of data through their behavior online, which enabled bootstrapping AI from research into real products. See the full events list here for summits and dinners focused on AI, Deep Learning and Machine Intelligence taking place in San Francisco, London, Amsterdam, Boston, New York, Singapore, Hong Kong, and Montreal!
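
As a crude illustration of the compute density Catanzaro describes, the sketch below times one large matrix multiplication -- the operation that dominates deep learning workloads -- on the CPU and then on a GPU; PyTorch is our choice here, not something the interview specifies:

```python
import time
import torch

# A single large matmul is a crude proxy for deep learning compute.
a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

start = time.time()
_ = a @ b
cpu_s = time.time() - start

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    _ = a_gpu @ b_gpu             # warm-up, so cuBLAS init isn't timed
    torch.cuda.synchronize()
    start = time.time()
    _ = a_gpu @ b_gpu
    torch.cuda.synchronize()      # wait for the kernel to finish
    gpu_s = time.time() - start
    print(f"CPU: {cpu_s:.3f}s  GPU: {gpu_s:.3f}s  speedup: {cpu_s / gpu_s:.1f}x")
else:
    print(f"CPU: {cpu_s:.3f}s (no GPU available)")
```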


Chipmakers Get Serious About Autonomous Driving At CES 2017

Forbes

Intel's GO brand describes the company's compute platform for autonomous driving capabilities, designed to be paired with its connectivity solutions. Intel's datacenter group can also enable autonomous-vehicle manufacturers to do their training in their datacenters using Intel technology. NVIDIA's approach is more focused on the artificial intelligence (AI) and machine learning aspects of autonomous driving: while Intel has an end-to-end solution, NVIDIA offers its own unique approach that uses GPUs in the datacenter for training, and Tegra chips and GPUs in the car, with its Drive PX2, for inference.
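
The split NVIDIA describes -- train in the datacenter, infer in the car -- typically means exporting the trained network into a portable format for the embedded runtime. The sketch below uses PyTorch's ONNX export purely as a stand-in; the article does not say which toolchain Drive PX2 deployments actually use, and the tiny perception model is invented:

```python
import torch
import torch.nn as nn

# Train (or load) a network in the datacenter...
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 4),             # e.g. four driving-relevant classes
)
model.eval()

# ...then export it to a portable format for the in-vehicle runtime.
dummy_input = torch.randn(1, 3, 224, 224)   # one camera frame
torch.onnx.export(model, dummy_input, "perception.onnx")
print("exported perception.onnx for deployment")
```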


This tiny supercomputer is all the rage

#artificialintelligence

But because the relevant parts and programs come preinstalled in a metal enclosure about the size of a medium suitcase, and because it pairs advanced hardware with fast connectivity, Nvidia claims the DGX-1 is easier to set up and quicker at analyzing data than previous GPU systems. Jackie Hunter, CEO of London-based BenevolentAI's life sciences arm, BenevolentBio, says her data science team had models training on the DGX-1 the same day it was installed. "If you're incorporating not just x-rays, but a whole host of clinical information, billing information, and social media feeds as indicators of a patient's health, you really do need large amounts of GPU computing power to crush that," says center director Mark Michalski. Fidelity Labs, the R&D arm of Fidelity Investments, also owns two DGX-1s and plans to use them to build neural networks, or computer systems modeled on the human brain, says labs director Sean Belka.


Artificial Intelligence & Machine Learning: Top 100 Influencers and Brands

#artificialintelligence

In 2014 Google bought artificial intelligence startup DeepMind for $400 million (£263 million), making it one of the largest tech acquisitions to date. So we analysed 1.1M tweets from 30 November 2015 to 24 February 2016 mentioning the keywords #AI OR "Artificial Intelligence" OR ArtificialIntelligence OR "Machine Learning" OR Machinelearning, and identified the top 100 most influential brands and individuals leading the discussion on Twitter. Below you can see a network map of the top 100 engaged users in the Artificial Intelligence and Machine Learning conversation. Below that is another network map, created with our Influencer Relationship Management (IRM) software, showing the #2 influencer, Kirk Borne, and the conversations to and from the different influencers in his field.
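
For readers curious how such a keyword filter works mechanically, here is a minimal matching sketch; the sample tweets are invented, and the real collection and influencer-scoring pipeline is not described in the article:

```python
import re

# Patterns mirroring the article's query: #AI OR "Artificial Intelligence"
# OR ArtificialIntelligence OR "Machine Learning" OR Machinelearning.
# \s* allows zero spaces, so "ArtificialIntelligence" also matches.
PATTERNS = [
    re.compile(r"#AI\b", re.IGNORECASE),
    re.compile(r"artificial\s*intelligence", re.IGNORECASE),
    re.compile(r"machine\s*learning", re.IGNORECASE),
]

def matches_query(text: str) -> bool:
    """Return True if a tweet would have matched the study's keyword query."""
    return any(p.search(text) for p in PATTERNS)

tweets = [
    "Excited about #AI and deep learning!",
    "MachineLearning is eating the world",
    "Nothing to see here",
]
print([t for t in tweets if matches_query(t)])
```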


NVIDIA and Microsoft Accelerate AI Together

#artificialintelligence

This jointly optimized platform runs the new Microsoft Cognitive Toolkit (formerly CNTK) on NVIDIA GPUs, including the NVIDIA DGX-1 supercomputer, which uses Pascal architecture GPUs with NVLink interconnect technology, and on Azure N-Series virtual machines, currently in preview. Faster performance: when compared to running on CPUs, the GPU-accelerated Cognitive Toolkit performs deep learning training and inference much faster on NVIDIA GPUs, both in Azure N-Series servers and on premises.
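
As a small, hypothetical illustration of pointing the Cognitive Toolkit at a GPU, the sketch below builds a toy network and evaluates it; it assumes the CNTK 2.x Python API and a CUDA-capable device, and the model itself is made up:

```python
import numpy as np
import cntk as C

# Ask CNTK to run on the first GPU; this is the "GPU-accelerated
# Cognitive Toolkit" path the announcement describes.
C.device.try_set_default_device(C.device.gpu(0))

# A toy classifier standing in for a real training workload.
x = C.input_variable(784)
z = C.layers.Sequential([
    C.layers.Dense(64, activation=C.relu),
    C.layers.Dense(10),
])(x)

# Evaluate the network on one random sample.
sample = np.random.rand(1, 784).astype(np.float32)
print(C.softmax(z).eval({x: sample}))
```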