If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
AI has been filling in the gaps for illustrators and photographers for years now -- literally, it intelligently fills gaps with visual content. But the latest tools are aimed at letting an AI give artists a hand from the earliest, blank-canvas stages of a piece. Nvidia's new Canvas tool lets the creator rough in a landscape like paint-by-numbers blobs, then fills it in with convincingly photorealistic (if not quite gallery-ready) content. Each distinct color represents a different type of feature: mountains, water, grass, ruins, etc. When colors are blobbed onto the canvas, the crude sketch is passed to a generative adversarial network. GANs essentially pass content back and forth between a creator AI that tries to make (in this case) a realistic image and a detector AI that evaluates how realistic that image is.
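Before the GAN ever sees the sketch, the colored blobs have to be encoded as a semantic label map the network can consume. A minimal sketch of that preprocessing step follows; the palette colors and class names are purely illustrative assumptions, not Nvidia Canvas's actual encoding:

```python
import numpy as np

# Hypothetical palette: these colors and classes are illustrative only.
PALETTE = {
    (0, 0, 255): 0,     # water
    (0, 255, 0): 1,     # grass
    (128, 128, 128): 2  # mountain
}

def sketch_to_label_map(rgb):
    """Convert an H x W x 3 blob sketch into an H x W semantic label map."""
    labels = np.full(rgb.shape[:2], -1, dtype=np.int64)  # -1 = unrecognized color
    for color, cls in PALETTE.items():
        mask = np.all(rgb == np.array(color, dtype=rgb.dtype), axis=-1)
        labels[mask] = cls
    return labels

# A 2 x 2 "sketch": water across the top, grass bottom-left, mountain bottom-right.
sketch = np.array([[[0, 0, 255], [0, 0, 255]],
                   [[0, 255, 0], [128, 128, 128]]], dtype=np.uint8)
print(sketch_to_label_map(sketch))  # [[0 0]
                                    #  [1 2]]
```

In a GauGAN-style pipeline, a label map like this (one class index per pixel) is what the generator conditions on when synthesizing the photorealistic output.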
NVIDIA has launched a new app you can use to paint lifelike landscape images -- even if you have zero artistic skills and a first grader can draw better than you. The new application is called Canvas, and it can turn childlike doodles and sketches into photorealistic landscape images in real time. It's now available for download as a free beta, though you can only use it if your machine is equipped with an NVIDIA RTX GPU. Canvas is powered by the GauGAN AI painting tool, which NVIDIA Research developed and trained using 5 million images. When the company first introduced GauGAN to the world, NVIDIA VP Bryan Catanzaro described its technology as a "smart paintbrush."
Nvidia announced on Tuesday that its Fleet Command service is now generally available. The subscription service allows businesses to roll out and manage AI applications at the edge. "Within minutes of installation, the platform lets administrators add or delete applications, update system software over the air, and monitor the health of devices spread across vast distances," the company said. Nvidia VP and GM of enterprise and edge computing Justin Boitano said Fleet Command can turn any Nvidia-certified server into a "secure edge AI system". When combined with its recently launched Base Command product, Nvidia said it now offers an end-to-end process from AI model creation to production.
With massive amounts of data being stored every second comes the opportunity to create meaningful, even revolutionary, models. This data comes in several forms, including text, images and videos, all of which allow advanced models to be built using techniques such as Deep Learning. This wealth of data has also enabled computer-vision applications in products such as self-driving cars and facial recognition in phones. When creating a Deep Learning application, one of the first decisions to be made is where the model will be trained: either locally on a machine or through a third-party cloud provider. This is an important decision, as it can significantly impact the training time of a model.
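One simple input to the local-versus-cloud decision is whether the machine at hand has a usable GPU at all. The heuristic below is an assumption-laden sketch (it just checks for the `nvidia-smi` CLI on the PATH rather than querying a training framework directly), but it illustrates how that check might gate the choice:

```python
import shutil

def local_gpu_available() -> bool:
    """Rough heuristic: assume a trainable local NVIDIA GPU is present if
    the `nvidia-smi` CLI is on the PATH. Real projects would query their
    framework directly (e.g. torch.cuda.is_available() in PyTorch), but
    this version avoids any heavy imports."""
    return shutil.which("nvidia-smi") is not None

# Choose where to train based on the check.
target = "local GPU" if local_gpu_available() else "cloud provider"
print(f"Training target: {target}")
```

In practice the decision also weighs dataset size, cost, and iteration speed, not just hardware presence.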
In the war to prove who's better at high-resolution gaming performance, Nvidia on Monday added three more allies: Rust, Doom Eternal and Lego Builder's Journey are joining the more than 55 other games that support its DLSS technology. The company also said Linux gamers would soon get access to DLSS through Proton for Vulkan. DLSS, or Deep Learning Super Sampling, taps the AI Tensor cores on Nvidia's 2000- and 3000-series GPUs to render games at a lower resolution and then upscale the result, with visual quality comparable to native rendering at the higher resolution. We've tried DLSS, and it's like black magic. The company said that starting June 22, Linux gamers can download the Nvidia Linux driver and enable Proton in Steam to get DLSS in such games as Doom Eternal, No Man's Sky and Wolfenstein Youngblood.
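DLSS itself is a proprietary neural upscaler, so it can't be reproduced here; but the crude baseline it is designed to beat, repeating each rendered pixel to fill the higher-resolution frame, is easy to sketch:

```python
import numpy as np

def nearest_neighbor_upscale(img, factor):
    """Naively upscale by repeating each pixel `factor` times along both
    axes. This blocky baseline is what learned upscalers like DLSS
    improve on by inferring plausible high-resolution detail instead."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# A 2 x 2 grayscale "frame" upscaled 2x to 4 x 4.
low_res = np.array([[10, 20],
                    [30, 40]])
print(nearest_neighbor_upscale(low_res, 2))
```

Each source pixel simply becomes a 2 x 2 block, which is why naive upscaling looks soft or blocky while DLSS can look close to native.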
Today is record day for the 4-for-1 stock split of shares of Nvidia (NASDAQ:NVDA) stock. If you'd purchased and held NVDA stock from my June 9 recommendation at $700 a share, you'd be sitting on a 6.42% gain and the promise of four times as many shares come mid-July. NVDA stock hit $610 in February this year and dipped to $500 in March. It hit an all-time high of $721 on June 14 and closed Friday at $745 per share. One cannot predict the stock movement after the split, but Nvidia is one company that has the ability to scale and generate massive revenue in the coming months.
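The split mechanics are plain arithmetic: share count multiplies by the split ratio, price divides by it, and the position's value is unchanged. A quick sketch using the article's figures (the 100-share position size is an illustrative assumption):

```python
def split_position(shares, price, ratio):
    """Apply an N-for-1 stock split: `ratio` times as many shares,
    each at 1/`ratio` of the price; total value is unchanged."""
    return shares * ratio, price / ratio

buy_price, close_price = 700.0, 745.0
gain_pct = (close_price - buy_price) / buy_price * 100
print(f"Gain since June 9: {gain_pct:.1f}%")  # ~6.4%

shares, price = split_position(100, close_price, 4)
print(shares, price)  # 400 shares at $186.25 -- same $74,500 position
```

The split changes nothing fundamental; it only lowers the per-share price, which can make the stock accessible to more buyers.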
NVIDIA's Deep Learning Super Sampling (DLSS) is about to reach a host of big-name games -- and more titles that don't rely on Windows. The company has announced that Facepunch Studios' survival hit Rust is adding DLSS support on July 1st. That's on top of a slew of already-revealed major titles receiving DLSS, including Doom Eternal (which also gets ray-traced reflections) on June 29th and, at an unspecified point, Red Dead Redemption 2. You can also expect to see DLSS in more Linux titles. A driver update arriving on June 22nd will enable DLSS in Vulkan-based games using the Proton compatibility layer. If a Windows game isn't quite running smoothly enough on your Linux rig, the AI-powered tech might make it more enjoyable.
Silicon Valley adaptive computing bellwether Xilinx announced its entrance into the growing system-on-module (SOM) market today, with a portfolio of palm-sized compute modules for embedded applications that accelerate AI, machine learning and vision at the edge. Xilinx Kria will eventually expand into a family of single-board computers based on reconfigurable FPGA (Field Programmable Gate Array) technology, coupled to Arm CPU cores and a full software stack with an app store; the first of these is specifically targeted at AI machine vision and inference applications. The Xilinx Kria K26 SOM employs the company's UltraScale+ multi-processor system-on-chip (MPSoC) architecture, which sports a quad-core Arm Cortex-A53 CPU, along with over 250 thousand logic cells and an H.264/265 video compression/decompression engine (codec). This may sound like alphabet soup as I spit out acronyms; however, the underlying solution is a compelling offering for developers and engineers looking to give new intelligent systems, in industries like security, smart cities, retail analytics, autonomous machines and robotics, the ability to see, infer information and adapt to their deployments in the field. Also on board the Xilinx Kria K26 SOM are 4GB of DDR4 memory and 245 general-purpose I/O pins, along with the ability to support 15 cameras, up to 40 Gbps of combined Ethernet throughput, and four USB 2/3-compatible ports.
THERE'S AN APOCRYPHAL story about how NVIDIA pivoted from games and graphics hardware to dominate AI chips – and it involves cats. Back in 2010, Bill Dally, now chief scientist at NVIDIA, was having breakfast with a former colleague from Stanford University, the computer scientist Andrew Ng, who was working on a project with Google. "He was trying to find cats on the internet – he didn't put it that way, but that's what he was doing," Dally says. Ng was working at the Google X lab on a project to build a neural network that could learn on its own. The neural network was shown ten million YouTube videos and learned how to pick out human faces, bodies and cats – but to do so accurately, the system required thousands of CPUs (central processing units), the workhorse processors that power computers. "I said, 'I bet we could do it with just a few GPUs,'" Dally says. GPUs (graphics processing units) are specialised for more intense workloads such as 3D rendering – and that makes them better than CPUs at powering AI. Dally turned to Bryan Catanzaro, who now leads deep learning research at NVIDIA, to make it happen.
Did you miss the opportunity to join the conversation on Artificial Intelligence and how it shapes the next frontier of our humanity? First, we're so sorry that you missed it! The event took place on Saturday, 20 February 2021 at 09:00 AM Pacific Time (US & Canada). We had an incredible time together discussing our role with Black leaders, top experts, and innovators from the world's best tech companies and our community. That's EXACTLY why we'll make the replay available.