Hardware


r/deeplearning - Do you need a lot of resources to utilize the network you trained?

#artificialintelligence

This is a pretty active area of research, namely "edge device computing," which often intertwines with "model compression." Embedded devices with GPUs, such as the NVIDIA Jetson TX2, are a good place to start: they give you a smaller GPU with CUDA support in an embedded setting. However, you must make sure your models are small enough to fit on a device with limited compute. Frameworks like TensorFlow let you train a model on a GPU, save the weights, and then perform inference elsewhere on a CPU. You could even run inference on something like a Raspberry Pi, but keep in mind you will be severely limited on such a device.
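The save-the-weights-then-infer-elsewhere workflow described above can be sketched with plain NumPy so it runs anywhere (in a real project you would export with TensorFlow's `model.save_weights()` instead; the tiny network, its random "trained" weights, and the `weights.npz` filename here are all illustrative assumptions):

```python
import numpy as np

# Stand-in for weights trained on a GPU elsewhere: a tiny 4 -> 8 -> 2 MLP.
# Saving to a portable file is the step that decouples training from inference.
rng = np.random.default_rng(0)
w1, b1 = rng.standard_normal((4, 8)), np.zeros(8)
w2, b2 = rng.standard_normal((8, 2)), np.zeros(2)
np.savez("weights.npz", w1=w1, b1=b1, w2=w2, b2=b2)

def predict(x, params):
    """CPU-only forward pass: dense -> ReLU -> dense -> softmax."""
    h = np.maximum(x @ params["w1"] + params["b1"], 0.0)
    logits = h @ params["w2"] + params["b2"]
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# On the edge device: load the saved weights and run inference on CPU.
params = np.load("weights.npz")
probs = predict(rng.standard_normal((1, 4)), params)
print(probs.shape)
```

The key design point is that the inference side needs only the weight file and a forward pass, not the training framework or a GPU, which is what makes deployment on constrained hardware like a Raspberry Pi feasible at all.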


Computer monitors on sale this weekend: Shop deals on Dell, Lenovo, LG, and more

Mashable

Whether you use a monitor for gaming, editing photos, or just doing work, it doesn't hurt to treat yourself to an upgrade. Best to do so when you can take advantage of some awesome sale prices. We found six monitors on sale this weekend, ranging in size and price. As you're looking, keep in mind the refresh rate, screen size, and display technology. One of the monitors, for example, has a 75Hz refresh rate, which makes it a solid pick for gaming.


Lenovo Smart Tab review: A hybrid smart display that lives up to the hype

Engadget

Tablets often go unused, and when you do want to play with one, it's probably dead because you forgot to charge it. At least, that's what happens to me. Lenovo, however, has come up with a potential solution to the neglected tablet dilemma: the Smart Tab. It was a finalist for Best of CES this year in the smart home category, and here's why: You can place it inside a companion Bluetooth speaker dock, at which point it becomes an Alexa-powered smart display similar to Amazon's Echo Show. This way, when the Smart Tab is not being used as a tablet, it still has a purpose as a smart display.


GauGAN Turns Doodles into Stunning, Realistic Landscapes NVIDIA Blog

#artificialintelligence

A novice painter might set brush to canvas aiming to create a stunning sunset landscape -- craggy, snow-covered peaks reflected in a glassy lake -- only to end up with something that looks more like a multi-colored inkblot. But a deep learning model developed by NVIDIA Research can do just the opposite: it turns rough doodles into photorealistic masterpieces with breathtaking ease. The tool leverages generative adversarial networks, or GANs, to convert segmentation maps into lifelike images. The interactive app using the model, in a lighthearted nod to the post-Impressionist painter, has been christened GauGAN. GauGAN could offer a powerful tool for creating virtual worlds to everyone from architects and urban planners to landscape designers and game developers.


Nvidia announces $99 AI computer for developers, makers, and researchers

#artificialintelligence

In recent years, advances in AI have produced algorithms for everything from image recognition to instantaneous translation. But when it comes to applying these advances in the real world, we're only just getting started. A new product from Nvidia announced today at GTC -- a $99 AI computer called the Jetson Nano -- should help speed that process. The Nano is the latest in Nvidia's line of Jetson embedded computing boards, used to provide the brains for robots and other AI-powered devices. Plug one of these into your latest creation, and it'll be able to handle tasks like object recognition and autonomous navigation without relying on cloud processing power.


'Anthem' patch taps into NVIDIA's AI-powered antialiasing

Engadget

A patch to Anthem released on Tuesday allows for faster performance as well as some added features. The update includes NVIDIA Deep Learning Super Sampling (DLSS) and NVIDIA Highlights. The game developer claims that Anthem players will see up to 40 percent faster performance with DLSS. DLSS uses AI to continually make the game more efficient, and automatically delivers updated algorithms to your machine. Also new in the patch is NVIDIA Highlights, which automatically captures screenshots and game clips when players achieve certain milestones, such as defeating a large creature or discovering the Tombs of the Legionnaires.


BioWare's Anthem adds DLSS support as Nvidia's RTX push continues

PCWorld

BioWare's Anthem makes one hell of a first impression, and today, it got even stronger--at least if you're a gaming geek who fetishizes ultra-fast frame rates. An update rolled out today that adds support for Nvidia's Deep Learning Super Sampling (DLSS) technology, which uses machine learning and the dedicated tensor cores inside GeForce RTX graphics cards to make your games play faster. Anthem's level-up comes hot on the heels of RTX features debuting in games like Shadow of the Tomb Raider and Metro Exodus. Those two games, along with Battlefield V, pair DLSS with real-time ray tracing features to counteract the latter's performance hit. Anthem joins Final Fantasy XV in offering DLSS alone to supercharge frame rates--by up to a whopping 40 percent, Nvidia claims.


Apple unveils TV subscription service with help from Oprah Winfrey

The Guardian

Apple unveiled a host of new subscription services at a star-studded event in Cupertino, California, on Monday morning. The event marked the debut of a new era for a company that built its brand on hardware and software; just last week, Apple announced new products with little fanfare, saving its firepower for Monday's celebration of services, from its attempt to take on Netflix to a new Apple credit card. Steven Spielberg, Reese Witherspoon, Jennifer Aniston, Steve Carell, Kumail Nanjiani, and Big Bird were on hand to promote new creative projects that will be released through Apple's new subscription television service, Apple TV+. Spielberg's Amazing Stories will resurrect the 93-year-old brand of a science fiction magazine that inspired the director as a child. Witherspoon and Aniston announced The Morning Show, described by Aniston as "an honest look at the complex relationship between women and men in the workplace".


Nvidia GPUs for data science, analytics, and distributed machine learning using Python with Dask

ZDNet

Nvidia has been more than a hardware company for a long time. As its GPUs are broadly used to run machine learning workloads, machine learning has become a key priority for Nvidia. In its GTC event this week, Nvidia made a number of related points, aiming to build on machine learning and extend to data science and analytics. Nvidia wants to "couple software and hardware to deliver the advances in computing power needed to transform data into insights and intelligence." Jensen Huang, Nvidia CEO, emphasized the collaborative aspect between chip architecture, systems, algorithms and applications.


Nvidia unveils incredible 'smart paintbrush' software that uses AI to turn simple doodles into art

Daily Mail

A new piece of software developed by American tech company NVIDIA uses deep learning to elevate even the roughest sketches into works of art. The new program, dubbed GauGAN after the famous French post-Impressionist painter Paul Gauguin, uses a tool called generative adversarial networks (GANs) to interpret simple lines and convert them into hyper-realistic images. Its application could help professionals across a range of disciplines, such as architecture and urban planning, render images and visualizations faster and with greater accuracy, according to the company. Simple shapes become mountains and lakes with just a stroke of what NVIDIA calls a 'smart paintbrush'. Artificial intelligence systems rely on neural networks, which try to simulate the way the brain works in order to learn.