Results


AI everywhere

#artificialintelligence

"We invented a computing model called GPU accelerated computing and we introduced it almost slightly over 10 years ago," Huang said, noting that while AI is only recently dominating tech news headlines, the company was working on the foundation long before that. Nvidia's tech now resides in many of the world's most powerful supercomputers, and the applications include fields that were once considered beyond the realm of modern computing capabilities. Now, Nvidia's graphics hardware occupies a more pivotal role, according to Huang – and the company's long list of high-profile partners, including Microsoft, Facebook and others, bears him out. GTC, in other words, has evolved into arguably the biggest developer event focused on artificial intelligence in the world.


Artificial Intelligence, IBM, NVIDIA Driving Changes to Credit Cards, Health Care, Physical Security

#artificialintelligence

More important to me is how this will change our lives. I spent some time last week talking to IBM about how its partnership with NVIDIA and its advancements with Watson and OpenPOWER will change the world around us. We spoke about a number of artificial intelligence trends, and several stood out for me.

Artificial Intelligence and Credit Card Security

Every year, financial institutions write off billions in losses due to credit card fraud, and a great deal of focus has been placed on stopping this steady drip, drip, drip of illegal costs. Current systems are advanced enough to run four fraud checks at the time of the transaction, but those checks simply aren't enough to stop the flood of people cloning, stealing and skimming credit cards to steal money.
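
To make the idea of transaction-time fraud checks concrete, here is a minimal, hypothetical Python sketch; the four rule names, thresholds, and Transaction fields are invented for illustration and do not describe IBM's, Watson's, or any issuer's actual system.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float               # purchase amount in USD
    country: str                # country where the card was used
    home_country: str           # cardholder's home country
    merchant_category: str      # e.g. "electronics", "grocery"
    minutes_since_last: float   # time since the previous transaction

# Four illustrative rule-based checks; real systems combine far more
# signals, increasingly scored by machine learning models.
def unusually_large(tx: Transaction) -> bool:
    return tx.amount > 5000.0

def foreign_use(tx: Transaction) -> bool:
    return tx.country != tx.home_country

def rapid_fire(tx: Transaction) -> bool:
    return tx.minutes_since_last < 1.0

def risky_category(tx: Transaction) -> bool:
    return tx.merchant_category in {"wire_transfer", "gift_cards"}

def flag_transaction(tx: Transaction) -> bool:
    checks = (unusually_large, foreign_use, rapid_fire, risky_category)
    # Flag for review if two or more checks fire.
    return sum(check(tx) for check in checks) >= 2

tx = Transaction(amount=6200.0, country="BR", home_country="US",
                 merchant_category="gift_cards", minutes_since_last=0.5)
print(flag_transaction(tx))  # True -> hold for review
```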


AMD chases the AI trend with its Radeon Instinct GPUs for machine learning

PCWorld

With the Radeon Instinct line, AMD joins Nvidia and Intel in the race to put its chips into AI applications--specifically, machine learning for everything from self-driving cars to art. The company plans to launch three products under the new brand in 2017, which include chips from all three of its GPU families. The passively cooled Radeon Instinct MI6 will be based on the company's Polaris architecture. It will offer 5.7 teraflops of performance and 224GBps of memory bandwidth, and will consume up to 150 watts of power. The small-form-factor, Fiji-based Radeon Instinct MI8 will provide 8.2 teraflops of performance and 512GBps of memory bandwidth, and will consume up to 175 watts of power.
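
As a quick sanity check on those figures, dividing each card's quoted peak teraflops by its quoted board power gives a rough performance-per-watt comparison; the snippet below simply restates that arithmetic and is not an AMD-published metric.

```python
# Peak performance (teraflops) and board power (watts) as quoted above.
cards = {
    "Radeon Instinct MI6 (Polaris)": (5.7, 150),
    "Radeon Instinct MI8 (Fiji)":    (8.2, 175),
}

for name, (tflops, watts) in cards.items():
    # Gigaflops per watt = (teraflops * 1000) / watts
    print(f"{name}: {tflops * 1000 / watts:.1f} GFLOPS/W peak")
# MI6: ~38.0 GFLOPS/W, MI8: ~46.9 GFLOPS/W
```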


Azure N-Series: General availability on December 1

#artificialintelligence

I am really excited to announce that the Azure N-Series will be generally available on December 1st, 2016. Azure N-Series virtual machines are powered by NVIDIA GPUs and give customers and developers access to industry-leading accelerated computing and visualization experiences. I am also excited to announce global access to these sizes, with the N-Series available in South Central US, East US, West Europe and South East Asia, all on December 1st. We've had thousands of customers participate in the N-Series preview since we launched it back in August. We've heard positive feedback on the enhanced performance and the work we have done with NVIDIA to make this a completely turnkey experience for you.


Intel shares artificial intelligence strategy

#artificialintelligence

Intel announced a slew of products, technologies and investments in an effort to cement its position in the field of artificial intelligence. In the new push, Intel has assembled a set of technology options to drive AI capabilities in everything from smart factories and drones to sports, fraud detection and autonomous cars. Intel is increasing its focus on AI because it believes its hardware can power AI products like those recently released by companies such as Facebook and Google. In a blog post, Intel CEO Brian Krzanich said, "Intel is uniquely capable of enabling and accelerating the promise of AI. Intel is committed to AI and is making major investments in technology and developer resources to advance AI for business and society."


UK Startup Takes On GPUs with Neural Network Accelerator

#artificialintelligence

AI startup Graphcore has emerged from stealth mode with the announcement of $30 million in initial Series A funding. The Bristol, UK-based company will use the cash infusion to complete development of its Intelligent Processing Unit (IPU), a custom-built chip aimed at machine learning workloads. The round was led by Robert Bosch Venture Capital GmbH and Samsung Catalyst Fund; also joining were Amadeus Capital Partners, C4 Ventures, Draper Esprit plc, Foundation Capital and Pitango Venture Capital. The IPU has been under development at Graphcore for two years, with the first product slated for release in the second half of 2017. It is designed to work across a range of machine learning applications and is applicable to both training and inference of neural networks.


GPUs Reshape Computing

Communications of the ACM

Pictured: Nvidia's Titan X graphics card, featuring the company's Pascal-powered graphics processing unit driven by 3,584 CUDA cores running at 1.5GHz.

As researchers continue to push the boundaries of neural networks and deep learning--particularly in speech recognition and natural language processing, image and pattern recognition, text and data analytics, and other complex areas--they are constantly on the lookout for new and better ways to extend and expand computing capabilities. For decades, the gold standard has been high-performance computing (HPC) clusters, which toss huge amounts of processing power at problems--albeit at a prohibitively high cost. This approach has helped fuel advances across a wide swath of fields, including weather forecasting, financial services, and energy exploration. However, in 2012, a new method emerged.


Intel Launches 'Knights Landing' Phi Family for HPC, Machine Learning

#artificialintelligence

From ISC 2016 in Frankfurt, Germany, this week, Intel Corp. launched the second-generation Xeon Phi product family, formerly code-named Knights Landing, aimed at HPC and machine learning workloads. "We're not just a specialized programming model," said Intel's General Manager of HPC Compute and Networking, Barry Davis, in a hands-on technical demo held at ISC. "Knights Landing" also puts integrated on-package memory in the processor, which benefits memory bandwidth and overall application performance. For comparison, NVIDIA's Pascal P100 GPU for NVLink-optimized servers offers 5.3 teraflops of double-precision floating point performance, while the PCIe version supports 4.7 teraflops of double-precision.
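
The bandwidth benefit Davis alludes to can be illustrated with the standard roofline model, in which attainable throughput is the lesser of peak compute and memory bandwidth times a kernel's arithmetic intensity. The Python sketch below uses assumed, round numbers (roughly 3 double-precision teraflops, ~90GBps DDR4 versus ~400GBps on-package bandwidth), not official Intel specifications.

```python
def roofline_gflops(peak_gflops, bandwidth_gbs, intensity_flops_per_byte):
    """Attainable GFLOPS = min(peak compute, bandwidth * arithmetic intensity)."""
    return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)

# Assumed, illustrative figures: ~3 DP teraflops peak, ~90 GB/s from
# off-package DDR4 versus ~400 GB/s from on-package memory.
for label, bw in [("DDR4", 90), ("on-package", 400)]:
    for ai in (0.5, 2, 8, 32):  # FLOPS performed per byte moved
        print(f"{label}, AI={ai:>4}: {roofline_gflops(3000, bw, ai):6.0f} GFLOPS")
# Low-intensity kernels are bandwidth-bound, so the on-package memory
# raises attainable performance long before the compute peak matters.
```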


GPUs and Deep Learning

#artificialintelligence

Deep learning, built on deep neural nets (DNNs), is the technical craze these days, targeting everything from self-driving cars to tagging photos. DNNs are just one of many artificial intelligence (AI) research areas, but they have become more popular as processor performance has increased, allowing more complex systems. DNNs require the matrix number-crunching capabilities found in FPGAs and GPUs.
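
To see what that matrix number-crunching looks like, note that a single fully connected DNN layer's forward pass is just a matrix multiply followed by a nonlinearity. The NumPy sketch below uses arbitrary layer sizes purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# One fully connected layer: 256 inputs -> 128 outputs, batch of 64.
x = rng.standard_normal((64, 256))   # input activations
W = rng.standard_normal((256, 128))  # weights
b = np.zeros(128)                    # biases

# Forward pass: a matrix multiply (the expensive part) plus a ReLU.
y = np.maximum(x @ W + b, 0.0)
print(y.shape)  # (64, 128)

# Each such layer costs ~2 * 64 * 256 * 128 floating point operations;
# deep networks stack many of them, which is why throughput-oriented
# hardware like GPUs dominates training.
print(2 * 64 * 256 * 128)  # ~4.2 million FLOPs for this one layer
```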


NVIDIA Corporation (NASDAQ:NVDA) - NVIDIA Q1'16 Earnings Conference Call: Full Transcript

#artificialintelligence

With me on the call today from NVIDIA are Jen-Hsun Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer. We also extended our VR platform by adding special kits to our VRWorks software development kit that help to provide an even greater sense of presence in VR. The P100 utilizes a combination of technologies, including NVLink, our high-speed interconnect that allows deep learning application performance to scale across multiple GPUs; high memory bandwidth; and multiple hardware features designed to natively accelerate AI applications. Universities, hyperscale vendors and large enterprises developing AI-based applications are showing strong interest in the system.