Results


Proposition: No speed limit on NVIDIA Volta with rise of AI - IBM Systems Blog: In the Making

#artificialintelligence

We're excited about the launch of NVIDIA's Volta GPU accelerators. Together with the NVIDIA NVLink "information superhighway" at the core of our IBM Power Systems, it provides what we believe to be the closest thing to an unbounded platform for those working in machine learning and deep learning and those dealing with very large data sets. Servers with POWER9 and Volta, with second-generation NVIDIA NVLink, PCI-Express 4, memory coherence and unprecedented internal bandwidth, will blow people away. Our IBM and NVIDIA partnership around these new technologies will surface for the first time in the U.S. Department of Energy's Summit supercomputer at Oak Ridge National Laboratory and the Sierra supercomputer at Lawrence Livermore National Laboratory, both of which are pushing the boundaries of big data science and simulation.


What the CIA's Tech Director Wants from AI

#artificialintelligence

Dawn Meyerriecks is less worried about how rival nation states might use AI to outflank the United States than about getting U.S. leaders to believe what AI is telling them. "What do I need in order to make a really good assessment on the back end, because that tells me what sort of collection I need to raise confidence to go address national leadership?" Consider that on Monday the Kremlin-backed news site Sputnik ran a story about a new Russian supercomputer for deep learning, part of a "conversational" AI project called iPavlov. "The computing power is fundamentally important for deep learning," Mikhail Burtsev of the Moscow Institute of Physics and Technology told Sputnik.


Driverless cars: Tim Cook says Apple AI is applicable to more than just cars

#artificialintelligence

The firms have established a startup support programme at Volkswagen's Data Lab to provide technical and financial support for international startups developing machine learning and deep learning applications for the automotive industry. Volvo Cars, Autoliv and Zenuity will use Nvidia's AI car computing platform as the foundation for their own advanced software development. Nvidia has also partnered with automotive supplier ZF and camera perception software supplier Hella to bring AI technology to New Car Assessment Program (NCAP) safety certification, supporting the mass deployment of self-driving vehicles. The firms will use Nvidia's Drive AI platform to develop software for scalable, modern driver assistance systems that connect their advanced imaging and radar sensor technologies to autonomous driving functionality.


NVIDIA Supercharges Rendering Performance with AI

#artificialintelligence

Running OptiX 5.0 on the NVIDIA DGX Station -- the company's recently introduced deskside AI workstation -- will give designers, artists and other content-creation professionals the rendering capability of 150 standard CPU-based servers. To achieve rendering performance equivalent to a single DGX Station, content creators would need access to a render farm of more than 150 servers drawing some 200 kilowatts of power, compared with 1.5 kilowatts for the DGX Station. Certain statements in this press release including, but not limited to, statements as to: the impact, benefits, performance and availability of the NVIDIA OptiX 5.0 SDK and the NVIDIA DGX Station; and AI transforming industries and having the potential to turbocharge the creative process, are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners' products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the reports NVIDIA files with the Securities and Exchange Commission, or SEC, including its Form 10-Q for the fiscal period ended April 30, 2017.
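To put those power figures in perspective, here is a minimal back-of-the-envelope sketch. It assumes only the numbers quoted above (roughly 200 kilowatts for the 150-server farm versus 1.5 kilowatts for one DGX Station); the 10-hour job length is a hypothetical illustration, not a figure from the release.

```python
# Back-of-the-envelope comparison of the power figures quoted above
# (150 CPU servers drawing ~200 kW total vs. one DGX Station at ~1.5 kW).
# The two power numbers come from the press release; the job length is assumed.

render_farm_power_kw = 200.0   # total draw of the CPU render farm
dgx_station_power_kw = 1.5     # draw of a single DGX Station

# Power required for roughly equivalent rendering throughput
power_ratio = render_farm_power_kw / dgx_station_power_kw
print(f"CPU farm draws ~{power_ratio:.0f}x the power for comparable rendering")

# Energy for a hypothetical 10-hour overnight render job
hours = 10
farm_kwh = render_farm_power_kw * hours
dgx_kwh = dgx_station_power_kw * hours
print(f"Farm: {farm_kwh:.0f} kWh vs. DGX Station: {dgx_kwh:.0f} kWh")
```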


NVIDIA Describes AI's Critical Role in Self-Driving Cars to Key Senate Committee - The Official NVIDIA Blog

#artificialintelligence

In testimony before a packed hearing of the U.S. Senate Committee on Commerce, Science and Transportation, Rob Csongor, vice president and general manager of the company's Automotive business, said that AI will, in the years ahead, enable self-driving cars that save tens of thousands of lives, provide mobility to the disabled, improve urban design and save vast amounts of unproductive time. "Our technology is being used by more than 225 automotive companies worldwide, including Audi, Tesla, Toyota, Volvo, Mercedes and others." Other panel members included Mitch Bainwol, president and CEO of the Alliance of Automobile Manufacturers, an association of major international automakers; John Maddox, president and CEO of the American Center for Mobility, a car testing and product-development association; and Colleen Sheehey-Church, national president of Mothers Against Drunk Driving. NVIDIA's DRIVE PX AI computing platform for vehicles enables cars to locate themselves on highly precise maps, detect nearby objects and plot a safe path forward through varying conditions.
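As a purely illustrative sketch of the localize, detect and plan loop described above (this is not the DRIVE PX API; every name below is a hypothetical placeholder), the three stages fit together roughly like this:

```python
# Hypothetical sketch of a localize -> detect -> plan driving loop.
# None of these names come from NVIDIA DRIVE PX; they are placeholders
# meant only to show how the stages described above connect.

from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class Pose:            # where the car is on the high-precision map
    x: float
    y: float
    heading: float

@dataclass
class Obstacle:        # a nearby object reported by perception
    x: float
    y: float
    radius: float

def localize(frame, hd_map) -> Pose:
    """Match camera/lidar/radar data against the map to estimate the car's pose."""
    raise NotImplementedError  # stands in for the real localization stack

def detect_objects(frame) -> List[Obstacle]:
    """Run perception networks to find vehicles, pedestrians and other objects."""
    raise NotImplementedError

def plan_path(pose: Pose, obstacles: List[Obstacle], hd_map):
    """Plot a safe trajectory forward given the pose, obstacles and map."""
    raise NotImplementedError

def drive_loop(frames: Iterable, hd_map, controller):
    # One iteration per incoming sensor frame: localize, detect, then plan.
    for frame in frames:
        pose = localize(frame, hd_map)
        obstacles = detect_objects(frame)
        trajectory = plan_path(pose, obstacles, hd_map)
        controller.follow(trajectory)
```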


NVIDIA helps the US build an AI for cancer research

Engadget

Microsoft isn't the only big-name tech company using AI to fight cancer. NVIDIA is partnering with the US Department of Energy and the National Cancer Institute to develop CANDLE (Cancer Distributed Learning Environment), an AI-based "common discovery platform" that aims to make cancer research 10 times faster on modern supercomputers with graphics processors. The AI will also automatically extract and study "millions" of patient records to understand how cancer spreads and recurs, and accelerate the simulation of protein interactions to see how they create the conditions for cancer. As with other AI-based medical research (including Microsoft's), its effectiveness still depends on humans -- they have to ask the right questions and collect the right data.


NVIDIA Teams with National Cancer Institute, U.S. Department of Energy to Create AI Platform for Accelerating Cancer Research

#artificialintelligence

SANTA CLARA, CA--(Marketwired - Nov 14, 2016) - NVIDIA (NASDAQ: NVDA) today announced that it is teaming up with the National Cancer Institute, the U.S. Department of Energy (DOE) and several national laboratories on an initiative to accelerate cancer research. Teams collaborating on CANDLE include researchers at the National Cancer Institute (NCI), the Frederick National Laboratory for Cancer Research and DOE, as well as at Argonne, Oak Ridge, Livermore and Los Alamos National Laboratories. Georgia Tourassi, director of the Health Data Sciences Institute at Oak Ridge National Laboratory, said, "Today cancer surveillance relies on manual analysis of clinical reports to extract important biomarkers of cancer progression and outcomes." Certain statements in this press release including, but not limited to, statements regarding the impact, benefits and goals of the Cancer Moonshot, the CANDLE AI framework, the combination of NVLink-enabled Pascal GPU architectures and the NVIDIA DGX-1; NVIDIA's participation in CANDLE; AI and deep learning techniques being essential to achieve the Cancer Moonshot objectives; expected gains in training neural networks for cancer research; large-scale data analytics and deep learning being central to Lawrence Livermore National Laboratory's missions; NVIDIA being at the forefront of accelerated machine learning; and CORAL/Sierra architectures being critical to developing scalable deep learning algorithms are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations.


Major AI Conference GTC DC Set for Washington - NVIDIA Blog

#artificialintelligence

Hard on the heels of the White House's new report about artificial intelligence's potential to address major societal challenges, NVIDIA will be holding Washington's largest-ever AI conference. The timing is right for policymakers and AI experts to meet, following yesterday's White House report concluding that "AI has the potential to help address some of the biggest challenges that society faces" and recommending increased funding for AI research to fuel economic growth. The report, Preparing for the Future of Artificial Intelligence, published by the National Science and Technology Council, notes that "the effectiveness of government itself is being increased as agencies build their capacity to use AI to carry out their missions more quickly, responsively and efficiently." One of the report's recommendations is to increase collaboration between government and industry on AI research: "AI can be a major driver of economic growth and social progress, if industry, civil society, government, and the public work together to support development of the technology …" A number of scientists and others who provided input on the report will speak at GTC DC, among them Jason Furman, chairman of the Council of Economic Advisers, who will discuss AI's economic impact.


IBM Servers with Tesla P100 GPUs, NVLink an HPC Milestone - NVIDIA Blog

#artificialintelligence

As a leader in server systems, IBM saw this trend coming several years ago and partnered with us to accelerate new data center workloads. After four years of development, IBM today introduced its Power System S822LC for High Performance Computing, powered by NVIDIA Tesla P100 GPUs and NVLink, to facilitate high-performance analytics and enable deep learning on ever-increasing mountains of data. This tight coupling of IBM and NVIDIA technology enables data to flow 5x faster than over PCIe, accelerating time to insight for many of today's most critical applications, like advanced analytics, deep learning and AI. IBM has already lined up several customers, including a large multinational corporation and a number of research organizations, among them the U.S. Department of Energy's Oak Ridge National Laboratory (ORNL) and Lawrence Livermore National Laboratory (LLNL).
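The article gives only the 5x ratio, so here is a minimal sketch of what that difference means for a data-hungry workload. The absolute bandwidth (about 16 GB/s for a PCIe 3.0 x16 link, with NVLink taken as five times that) and the 512 GB working set are assumptions made for illustration, not figures from the article.

```python
# Back-of-the-envelope transfer-time comparison. Only the ~5x ratio comes from
# the article; the absolute bandwidths and the data-set size are assumptions.

pcie_gb_per_s = 16.0                  # assumed: PCIe 3.0 x16, roughly 16 GB/s
nvlink_gb_per_s = 5 * pcie_gb_per_s   # article: data flows ~5x faster over NVLink

working_set_gb = 512.0                # hypothetical working set held in CPU memory

pcie_seconds = working_set_gb / pcie_gb_per_s
nvlink_seconds = working_set_gb / nvlink_gb_per_s

print(f"PCIe:   {pcie_seconds:.0f} s to move {working_set_gb:.0f} GB")
print(f"NVLink: {nvlink_seconds:.1f} s to move {working_set_gb:.0f} GB")
```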


IBM Linux Servers Designed to Accelerate Artificial Intelligence, Deep Learning and Advanced Analytics

#artificialintelligence

Collaboratively developed with some of the world's leading technology companies, the new Power Systems are uniquely designed to propel artificial intelligence, deep learning, high-performance data analytics and other compute-heavy workloads, which can help businesses and cloud service providers save money on data center costs. "NVIDIA NVLink provides tight integration between the POWER CPU and NVIDIA Pascal GPUs and improved GPU-to-GPU link bandwidth to accelerate time to insight for many of today's most critical applications like advanced analytics, deep learning and AI." Among those first in line to receive shipments are a large multinational retail corporation and the U.S. Department of Energy's Oak Ridge National Laboratory (ORNL) and Lawrence Livermore National Laboratory (LLNL). Fully compatible with Linux-based cloud environments, IBM's Power LC servers are optimized for data-rich applications and can deliver superior data center efficiency, lowering costs and reducing server sprawl.