If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
A lot had already been priced into Nvidia's (NVDA) shares going into the company's fiscal third-quarter report. Look for analysts to dial their estimates sharply higher after the latest results, and perhaps also wonder just how big Nvidia's long-term addressable market is.
TAIPEI, TAIWAN--(Marketwired - May 31, 2016) - Computex - NVIDIA (NASDAQ: NVDA) won big at the Computex Best Choice Awards, with the NVIDIA Tesla M40 GPU and NVIDIA Jetson TX1 module hauling in Gold Awards and the NVIDIA SHIELD Android TV clinching a Category Award. Garnering these three prestigious awards extends the company's winning streak -- the longest of any international Computex exhibitor -- to eight consecutive years. Taiwan's President Tsai Ing-wen will hand out the awards. The Best Choice Awards, established in 2002, honor innovation, functionality and market potential. The Gold Award-winning NVIDIA Tesla M40 GPU is the world's fastest deep learning training accelerator.
Nvidia has staked a big chunk of its future on supplying powerful graphics chips used for artificial intelligence, so it wasn't a great day for the company when Google announced two weeks ago that it had built its own AI chip for use in its data centers. Google's Tensor Processing Unit, or TPU, was built specifically for deep learning, a branch of AI through which software trains itself to get better at deciphering the world around it, so it can recognize objects or understand spoken language, for example. TPUs have been in use at Google for more than a year, including for search and to improve navigation in Google Maps. They provide "an order of magnitude better-optimized performance per watt for machine learning" compared to other options, according to Google. That could be bad news for Nvidia, which designed its new Pascal microarchitecture with machine learning in mind.
Microsoft has been using a type of programmable chip called the Field Programmable Gate Array (FPGA) to improve its hardware for machine learning, which typically requires a large amount of computing power. Last year, Google released TensorFlow, the software engine that powers its machine learning systems, free to the public via an open-source license. But while Google's chip is helping improve its machine learning tools, the company likely isn't in a position to abandon GPUs and processors made by other companies entirely, Patrick Moorhead, an analyst at Moor Insights & Strategy, told PCWorld. Google began using the TPU last April to help its StreetView software better process images, Jouppi told the Journal, speeding up the processing time for all of its images to just five days.
In the company's largest business segment, gaming, NVIDIA's revenue was up 17% year over year to $687 million. Automotive technology sales climbed 47% and datacenter revenue grew 63% over the same period. Aside from revenue growth, gross margins remained relatively flat year over year at 57.5%, up just 80 basis points. Operating expenses increased 6%, but operating income increased by 39% to $245 million.
"The work that recently was done at Microsoft Research, they've achieved superhuman levels of inferencing … of image recognition and voice recognition that's really kind of hard to imagine," he said, "and these networks are now huge." Nvidia's chips power IBM (IBM) Watson and Facebook's Big Sur server, Huang said. For Q1, Nvidia reported $1.3 billion in sales and earnings of 33 cents per share, up 13% and 38% year over year, respectively. Despite continued headwinds in the PC and smartphone markets, "gaming continues to appear to have macro immunity," he wrote.
The company enjoyed strong sales growth in three of its five reportable business segments, as products based on the long-awaited Pascal architecture started rolling out. "Accelerating our growth is deep learning, a new computing model that uses the GPU's massive computing power to learn artificial intelligence algorithms. Our new Pascal GPU architecture will give a giant boost to deep learning, gaming and VR." Delivering on these promises would be game changing, taking the wind out of arch-rival Advanced Micro Devices' sails; that is, unless AMD's upcoming Polaris architecture can match Pascal blow for blow.
Graphics processor specialist NVIDIA (NASDAQ:NVDA) reported results on Thursday night, covering the first quarter of fiscal year 2017. Anders Bylund is a Foolish Technology and Entertainment Specialist.
Tesla Motors (TSLA) partner Nvidia (NVDA) rocketed late Thursday after the maker of graphics chips beat Q1 sales expectations and topped earnings views by a penny, led by faster adoption of artificial intelligence technology that utilizes Nvidia graphics chips. "Accelerating our growth is deep learning, a new computing model that uses the GPU's (graphics processing unit's) massive computing power to learn artificial intelligence algorithms," he said in the company's earnings release. "Our new Pascal GPU architecture will give a giant boost to deep learning, gaming and VR (virtual reality)," he said. For the current quarter, Nvidia expects $1.35 billion in sales, plus or minus 2%, which would be up 17% at the midpoint vs. the year-ago quarter.
Based on the new NVIDIA Pascal GP100 GPU and powered by ground-breaking technologies, Tesla P100 delivers the highest absolute performance for HPC, technical computing, deep learning, and many computationally intensive datacenter workloads. Unlike other technical computing applications that require high-precision floating-point computation, deep neural network architectures have a natural resilience to errors due to the backpropagation algorithm used in their training. Kepler significantly increased the throughput of atomic operations to global memory compared to the earlier Fermi architecture; however, both Fermi and Kepler implemented shared memory atomics using an expensive lock/update/unlock pattern. Maxwell improved this by implementing native hardware support for shared memory atomic operations for 32-bit integers, and native shared memory 32-bit and 64-bit compare-and-swap (CAS), which can be used to implement other atomic functions with reduced overhead (compared to the Fermi and Kepler methods which were implemented in software).