Results


New AI That Makes Fake Videos May Be the End of Reality as We Know It

#artificialintelligence

A new artificial intelligence (AI) algorithm is capable of manufacturing simulated video imagery that is indistinguishable from reality, say researchers at Nvidia, a California-based tech company. AI developers at the company have released details of a new project that allows its AI to generate fake videos using only minimal raw input data. The technology can render a flawlessly realistic sequence showing what a sunny street looks like when it's raining, for example, as well as what a cat or dog looks like as a different breed, or even a person's face with a different facial expression. And this is video, not just still photos. For their work, researchers tweaked a familiar algorithm, known as a generative adversarial network (GAN), to allow their AI to create fresh visual data.
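
As a rough illustration of the technique the summary names, here is a minimal generative adversarial network training loop (a sketch in PyTorch with toy vector data and assumed layer sizes; it is not Nvidia's video-generation model):

```python
# Minimal GAN sketch: a generator learns to produce data that a discriminator
# cannot tell apart from real samples (toy vectors, not video frames).
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32

# Generator: maps random noise to fake samples.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: scores samples as real (1) or fake (0).
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(batch, data_dim)        # stand-in for real training data
    fake = G(torch.randn(batch, latent_dim))   # generator's attempt at fresh data

    # Train the discriminator to separate real from fake.
    d_loss = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Train the generator to fool the discriminator.
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```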


Global Automotive Artificial Intelligence Market Outlook 2017-2023 - The Big Three are Ford Motor Company, General Motors, and Fiat Chrysler Automobiles - Research and Markets

@machinelearnbot

DUBLIN--(BUSINESS WIRE)--The "Automotive Artificial Intelligence - Global Market Outlook (2017-2023)" report has been added to Research and Markets' offering. The Global Automotive Artificial Intelligence Market accounted for $563.58 million in 2016 and is expected to reach $5,265.81 million by 2023, growing at a CAGR of 37.6% during the forecast period. The automotive industry has seen the promise of artificial intelligence (AI) technology, and is among the industries at the forefront of using AI to augment human actions and to mimic the actions of humans. The arrival of features such as adaptive cruise control (ACC), blind-spot alert, and advanced driver assistance systems (ADAS), together with rising demand for convenience and safety, presents an opportunity for OEMs to build novel and innovative artificial intelligence systems that attract customers. Although 2016 was marred by some technological failures in self-driving cars, the year also saw a couple of successful test runs in the US.
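
As a quick arithmetic check of the forecast, the stated 37.6% CAGR is consistent with growing the 2016 figure to the 2023 figure over seven years (a sketch using only the numbers quoted above):

```python
# Verify the reported CAGR from the 2016 base and 2023 forecast (in $ millions).
start, end, years = 563.58, 5265.81, 2023 - 2016

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # ~37.6%, matching the report

# Year-by-year projection at the stated 37.6% growth rate.
for year in range(2016, 2024):
    print(year, round(start * 1.376 ** (year - 2016), 2))
```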


Nvidia looks to reduce AI training material through 'imagination'

ZDNet

Nvidia researchers have used a pair of generative adversarial networks (GANs) along with some unsupervised learning to create an image-to-image translation network that could allow for artificial intelligence (AI) training times to be reduced. In a blog post, the company explained how its GANs are trained on different data sets, but share a "latent space assumption" that allows for the generation of images by passing the image representation from one GAN to the next. "The use of GANs isn't novel in unsupervised learning, but the Nvidia research produced results -- with shadows peeking through thick foliage under partly cloudy skies -- far ahead of anything seen before," the company said. The benefit of this work is that network training could require less labelled data, it said. "For self-driving cars alone, training data could be captured once and then simulated across a variety of virtual conditions: Sunny, cloudy, snowy, rainy, nighttime, etc," Nvidia said.
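
The "latent space assumption" can be pictured as two encoder/decoder pairs that map both image domains into one shared code; translating an image then amounts to encoding it with one domain's encoder and decoding with the other's. The sketch below uses assumed toy fully-connected networks in PyTorch, not Nvidia's published architecture:

```python
# Shared-latent-space sketch for unsupervised image-to-image translation.
import torch
import torch.nn as nn

img_dim, latent_dim = 3 * 64 * 64, 256   # flattened toy image, shared code size

def encoder():
    return nn.Sequential(nn.Linear(img_dim, 512), nn.ReLU(), nn.Linear(512, latent_dim))

def decoder():
    return nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(), nn.Linear(512, img_dim))

enc_a, dec_a = encoder(), decoder()   # domain A, e.g. sunny street scenes
enc_b, dec_b = encoder(), decoder()   # domain B, e.g. rainy street scenes

x_a = torch.randn(1, img_dim)         # stand-in for a flattened domain-A image

z = enc_a(x_a)                        # shared latent representation
x_a_recon = dec_a(z)                  # reconstruction within domain A
x_ab = dec_b(z)                       # translation from domain A to domain B

# In the full method, adversarial (GAN) and reconstruction losses on each domain
# push both encoders toward the same latent distribution, so x_ab comes out as a
# plausible domain-B rendering of the domain-A input.
```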


Nvidia CEO: Gaming will be huge, but so will AI and data center businesses

#artificialintelligence

Nvidia reported a stellar quarter for the three months ended October 31. Nvidia had $2.6 billion in revenue in the quarter, and $1.5 billion of it came from graphics chips for gaming PCs. But the company's investment in artificial intelligence chips is paying off, with data center revenue growing beyond $500 million for the first time. Jensen Huang, CEO of Santa Clara, California-based Nvidia, said his company started investing in AI seven years ago, and that its latest AI chips are the result of years of work by several thousand engineers. That has given the company an edge in AI, and rivals are scrambling to keep up, he said.


NVidia $NVDA Earnings Glow With Machine Learning, AI, Bitcoin & Gaming Graphics

#artificialintelligence

High-performance graphics chip pioneer NVidia reported better-than-expected fiscal Q3 earnings after the market close on Thursday. NVidia raised its dividend 7 percent to 15 cents a share and intends to return $1.25 billion to shareholders during the next fiscal year. NVidia consistently trumps analysts' expectations, and the report comes just after Sony and gaming stocks Take-Two Interactive $TTWO and Activision Blizzard $ATVI delivered strong earnings. NVidia's graphics processors are gaming industry standouts for personal computers and video game consoles. Earnings: EPS of $1.33 came in well ahead of the 94 cents analysts expected, on revenue of $2.64 billion that beat the expected 18% growth to $2.36 billion. "We had a great quarter across all of our growth drivers."


Nvidia steps up its transition to an AI company

#artificialintelligence

Nvidia reported earnings that beat expectations and showed that the company's focus on artificial intelligence is still paying off. For the past decade, Nvidia has been moving beyond graphics chips for gamers, expanding to parallel processing in data centers and lately to artificial intelligence processing for deep learning neural networks and self-driving cars. The company reported earnings per share of $1.33 (up 60 percent from a year ago) on revenue of $2.6 billion (up 32 percent), beating Wall Street's expectations. The company's stock price is up more than 100 percent in the past year on the popularity of artificial intelligence. But it slumped during the day on Thursday, along with the broader market.
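
The growth figures quoted above imply the year-ago baselines directly (a small sanity check, using only the numbers in the summary):

```python
# Back out the implied year-ago figures from the reported growth rates.
revenue, revenue_growth = 2.6e9, 0.32   # $2.6B revenue, up 32%
eps, eps_growth = 1.33, 0.60            # $1.33 EPS, up 60%

print(f"Implied year-ago revenue: ${revenue / (1 + revenue_growth) / 1e9:.2f}B")  # ~$1.97B
print(f"Implied year-ago EPS:     ${eps / (1 + eps_growth):.2f}")                 # ~$0.83
```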


How to Invest in Artificial Intelligence

#artificialintelligence

There's a heated debate among the tech elite about whether artificial intelligence will destroy or enhance human life. Tesla founder Elon Musk has been sounding the alarm over AI for months, saying in September that AI could be the cause of World War III. Facebook founder Mark Zuckerberg, meanwhile, counters that AI will be a benefit to the world. Bryan Borzykowski is a Toronto-based business and investments writer. He's contributed to the New York Times, CNBC, BBC Capital, CNNMoney and several other publications.


Nvidia's new supercomputer is designed to drive fully autonomous vehicles

Mashable

Nvidia wants to make it easier for automotive companies to build self-driving cars, so it's releasing a brand new supercomputer designed to drive them. The chipmaker claims its new supercomputer is the world's first artificial intelligence computer designed for "Level 5" autonomy, which means vehicles that can operate themselves without any human intervention. The new computer will be part of Nvidia's existing Drive PX platform, which the GPU maker offers to automotive companies to provide the processing power for their self-driving car systems. Nvidia CEO Jensen Huang also announced that the company will soon release a new software development kit (SDK), Drive IX, to help developers build new AI-partner programs that improve the in-car experience.


To Compete With New Rivals, Chipmaker Nvidia Shares Its Secrets

#artificialintelligence

Researchers found Nvidia's graphics chips, originally built for gamers, were also good at powering deep learning, the software technique behind recent enthusiasm for artificial intelligence. Longtime chip kingpin Intel and a stampede of startups are building and offering chips to power smart machines. This week Nvidia released as open source the designs for a chip module it made to power deep learning in cars, robots, and smaller connected devices such as cameras. In a tweet this week, one Intel engineer called Nvidia's open source tactic a "devastating blow" to startups working on deep learning chips.