
New MLPerf Data Shows Competition Increases In AI, But NVIDIA Still Leads


Today saw the release of the second round (version 0.7) of MLPerf Inference benchmark results. Like the latest training results, announced in July, the new inference numbers show an increase in the number of companies submitting and in the number of platforms and workloads supported. The MLPerf inference numbers are segmented into four categories: Data Center, Edge, Mobile, and Notebook. The number of submissions increased from 43 to 327, and the number of companies submitting increased from just nine to 21. The companies submitting included semiconductor companies, device OEMs, and several test labs.

Setting up your Nvidia GPU for Deep Learning(2020)


This article aims to help anyone who wants to set up their Windows machine for deep learning. Although setting up your GPU for deep learning is slightly complex, the performance gain is well worth it. The steps I have taken to get my RTX 2060 ready for deep learning are explained in detail. The first step, when you search for the files to download, is to look at which version of CUDA TensorFlow supports (which can be checked here); at the time of writing this article it supports CUDA 10.1. To download cuDNN you will have to register as an Nvidia developer. I have provided the download links for all the software to be installed below.
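As a quick sanity check after installation, one way to confirm which CUDA toolkit version is on your PATH is to parse the output of `nvcc --version`. A minimal sketch (the sample string mirrors the output format of the CUDA 10.1 toolkit, and `parse_cuda_version` is a helper name of my own, not part of any NVIDIA tool):

```python
import re

def parse_cuda_version(nvcc_output: str) -> str:
    """Extract the CUDA release version (e.g. '10.1') from `nvcc --version` text."""
    match = re.search(r"release (\d+\.\d+)", nvcc_output)
    return match.group(1) if match else ""

# Sample fragment shaped like the CUDA 10.1 toolkit's `nvcc --version` output:
sample = "Cuda compilation tools, release 10.1, V10.1.243"
print(parse_cuda_version(sample))  # prints 10.1
```

On a machine with the toolkit installed, you could feed in the real output via `subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout` instead of the sample string; in TensorFlow 2.x, `tf.config.list_physical_devices('GPU')` then confirms the GPU is actually visible to the framework.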

Nvidia Dominates (Again) Latest MLPerf Inference Results


One wonders where the rest of the AI accelerator crowd is (Cerebras (CS-1), AMD (Radeon), Groq (Tensor Streaming Processor), SambaNova (Reconfigurable Dataflow Unit), Google (TPU), et al.). For the moment, Nvidia rules the MLPerf roost. It posted the top performances in the categories in which it participated, dominating the 'closed' datacenter and closed edge categories. MLPerf's closed categories impose system/network restrictions intended to ensure apples-to-apples comparisons among participating systems. The 'open' versions of the categories permit customization.

Global Deep Learning Market To Show Startling Growth During Forecast Period 2020–2026 – Zion Market Research - re:Jerusalem


The global Deep Learning market is expected to rise with an impressive CAGR and generate the highest revenue by 2026. Zion Market Research in its latest report published this information. The report is titled "Global Deep Learning Market 2020 With Top Countries Data, Revenue, Key Developments, SWOT Study, COVID-19 impact Analysis, Growth and Outlook To 2026". It also offers an exclusive insight into various details such as revenues, market share, strategies, growth rate, and products and their pricing by region/country for all major companies. The report provides a 360-degree overview of the market, listing various factors restricting, propelling, and obstructing the market in the forecast duration. The report also provides additional information such as interesting insights, key industry developments, detailed segmentation of the market, a list of prominent players operating in the market, and other Deep Learning market trends.

The Full Nerd ep. 154: Big Navi rumors, Nvidia Adobe AI, PS5 teardown


In this episode of The Full Nerd, Gordon Ung, Alaina Yee, and Adam Patrick Murray dive into the supposed leaks surrounding AMD's upcoming RDNA2 ("Big Navi") video cards, Nvidia's new Smart Portrait filter for Photoshop, and Sony's teardown of its PlayStation 5 console. Big Navi might be huge--a supposed leak, revealed on Twitter, teases a top-tier card with monstrous specs. Gordon picks apart the nuances of the numbers and puts them into perspective against Nvidia's RTX 30-series cards. Equally exciting for photographers is Nvidia's AI-powered tool for Photoshop, which will allow faster adjustments to gaze direction and lighting angles. If you've ever taken a group shot and had to deal with that one person looking at the wrong camera, you'll understand Gordon's enthusiasm for this new filter.

NVIDIA Shatters Inference Benchmarks


The key to these cutting-edge vehicles is inference -- the process of running AI models in real time to extract insights from enormous amounts of data. And when it comes to in-vehicle inference, NVIDIA Xavier has been proven the best -- and the only -- platform capable of real-world AI processing, yet again. NVIDIA GPUs smashed performance records across AI inference in data center and edge computing systems in the latest round of MLPerf benchmarks, the only consortium-based and peer-reviewed inference performance tests. NVIDIA Xavier extended the performance leadership it demonstrated in the first AI inference tests, held last year, while supporting all of the new use cases added for an energy-efficient edge-compute SoC. Inferencing for intelligent vehicles is a full-stack problem.

Chip industry is going to need a lot more software to catch Nvidia's lead in AI


The semiconductor industry is in the midst of a renaissance in chip design and performance improvement, but it will take a lot more software to catch up with graphics chip titan Nvidia, an industry conference made clear Tuesday. The Linley Fall Processor Conference, taking place as a virtual event this week and next, is one of the main meet-and-greet events each year for promising young chip companies. To kick off the show, the conference host, Linley Gwennap, who has been a semiconductor analyst for two decades, delivered a keynote Tuesday morning in which he argued that software remains the stumbling block for all companies that want to challenge Nvidia's lead in processing artificial intelligence. Among the presenters, Anil Mankar, head of product development at AI chip startup BrainChip, detailed the company's technology.

Nvidia makes a clean sweep of MLPerf predictions benchmark for artificial intelligence


Graphics chip giant Nvidia mopped the floor with its competition in a benchmark set of tests released Wednesday afternoon, demonstrating better performance on a host of artificial intelligence tasks. The benchmark, called MLPerf and administered by the MLPerf organization, an industry consortium, showed Nvidia achieving better speed on a variety of tasks that use neural networks, from categorizing images to recommending which products a person might like. Predictions are the part of AI where a trained neural network produces output on real data, as opposed to the training phase, when the neural network system is first being refined. Benchmark results on training tasks were announced by MLPerf back in July. Many of the scores in the test results pertain to Nvidia's T4 chip, which has been on the market for some time, but even more impressive results were reported for its A100 chips, unveiled in May.
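The training-versus-inference split described above can be sketched with a toy one-parameter linear model (purely illustrative numbers, nothing to do with the actual benchmark workloads):

```python
def train(data):
    # Training phase: iteratively fit weight w so that y ~ w * x.
    w = 0.0
    for _ in range(100):
        for x, y in data:
            w += 0.1 * (y - w * x) * x  # gradient step on squared error
    return w

def infer(w, x):
    # Inference (prediction) phase: apply the frozen model to new data.
    return w * x

w = train([(1.0, 2.0), (2.0, 4.0)])   # learns w ~ 2
print(round(infer(w, 3.0), 2))        # prints 6.0
```

Training is the expensive, iterative loop; inference is a single cheap forward pass over fixed weights, which is why it is benchmarked separately and why latency and throughput, rather than convergence, are the metrics that matter.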

Lenovo Smart Clock Essential review: Basic doesn't mean bad


One of our favorite gadgets from 2019 was the Google-powered Lenovo Smart Clock. It doesn't have all the bells and whistles of a typical Google smart display, but its alarm clock features, affordable price point and small form factor more than make up for it. Recently, however, the company debuted an even simpler version of the device, appropriately called the Lenovo Smart Clock Essential. With the Essential, the pretense of a smart display is gone altogether; the LCD screen has been replaced with a basic LED display. As a result, I don't quite like it as much as the original Lenovo Smart Clock, but it's also $30 cheaper (the Essential retails for $50 while the original Smart Clock is $80) and if all you really want is an alarm clock with some Google Assistant smarts, then the Essential certainly fits the bill. At its core, the Lenovo Smart Clock Essential is simply a Google-powered smart speaker with a built-in alarm clock.

Deconstructing Maxine, Nvidia's AI-powered video-conferencing technology


This article is part of "Deconstructing artificial intelligence," a series of posts that explore the details of how AI applications work. One of the things that caught my eye at Nvidia's flagship event, the GPU Technology Conference (GTC), was Maxine, a platform that leverages artificial intelligence to improve the quality and experience of video-conferencing applications in real time. Maxine uses deep learning for resolution improvement, background noise reduction, video compression, face alignment, and real-time translation and transcription. In this post, which marks the first installment of our "Deconstructing artificial intelligence" series, we will take a look at how some of these features work and how they tie in with AI research done at Nvidia. We'll also explore the pending issues and the possible business model for Nvidia's AI-powered video-conferencing platform.