Results


AI everywhere

#artificialintelligence

"We invented a computing model called GPU-accelerated computing, and we introduced it slightly over 10 years ago," Huang said, noting that while AI has only recently come to dominate tech news headlines, the company was working on the foundation long before that. Nvidia's tech now resides in many of the world's most powerful supercomputers, and the applications include fields that were once considered beyond the realm of modern computing capabilities. Now, Nvidia's graphics hardware occupies a more pivotal role, according to Huang – and the company's long list of high-profile partners, including Microsoft, Facebook and others, bears him out. GTC, in other words, has evolved into arguably the biggest developer event in the world focused on artificial intelligence.


Baidu Advances AI in the Cloud with Latest NVIDIA Pascal GPUs

#artificialintelligence

SANTA CLARA, CA--(Marketwired - Apr 17, 2017) - NVIDIA (NASDAQ: NVDA) today announced that its deep learning platform is now available as part of Baidu Cloud's deep learning service, giving enterprise customers instant access to the world's most adopted AI tools. The new Baidu Cloud offers the latest GPU computing technology, including Pascal architecture-based NVIDIA Tesla P40 GPUs and NVIDIA deep learning software. It provides both training and inference acceleration for open-source deep learning frameworks, such as TensorFlow and PaddlePaddle. "Baidu and NVIDIA are long-time partners in advancing the state of the art in AI," said Ian Buck, general manager of Accelerated Computing at NVIDIA. "Baidu understands that enterprises need GPU computing to process the massive volumes of data needed for deep learning."


Microsoft Infuses SQL Server With Artificial Intelligence

#artificialintelligence

SQL Server 2017, which will run on both Windows and Linux, is inching closer to release with a set of artificial intelligence capabilities that will change the way enterprises derive value from their business data, according to Microsoft. The Redmond, Wash., software giant on April 19 released SQL Server 2017 Community Technology Preview (CTP) 2.0. Joseph Sirosh, corporate vice president of the Microsoft Data Group, described the "production-quality" database software as "the first RDBMS [relational database management system] with built-in AI." Download links and instructions on installing the preview on Linux are available in this TechNet post from the SQL Server team at Microsoft. It's no secret to anyone keeping tabs on Microsoft lately that the company is betting big on AI, progressively baking its machine learning and cognitive computing technologies into a wide array of its cloud services, business software offerings and consumer products. "In this preview release, we are introducing in-database support for a rich library of machine learning functions, and now for the first time Python support (in addition to R)," stated Sirosh in the April 19 announcement.
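The in-database Python support Sirosh describes is invoked through SQL Server's `sp_execute_external_script` stored procedure. A minimal sketch of the shape such a call takes, written here as a Python string for clarity; the table name, columns, and script body are illustrative, not taken from the announcement:

```python
# Hedged sketch: the T-SQL shape for running in-database Python in
# SQL Server 2017 via sp_execute_external_script. Inside the external
# script, InputDataSet arrives as a pandas DataFrame and OutputDataSet
# is returned as a result set. Table and column names are hypothetical.
TSQL = """
EXEC sp_execute_external_script
    @language = N'Python',
    @script = N'
OutputDataSet = InputDataSet.describe().reset_index()
',
    @input_data_1 = N'SELECT Amount, Quantity FROM dbo.Sales';
"""

def uses_in_database_python(tsql: str) -> bool:
    """Cheap check that a T-SQL batch invokes the Python external runtime."""
    return "sp_execute_external_script" in tsql and "N'Python'" in tsql

print(uses_in_database_python(TSQL))
```

The same procedure accepts `@language = N'R'`, which is how the existing R support is exposed; Python simply becomes a second accepted language value.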


Facebook's Caffe2 AI tools come to iPhone, Android, and Raspberry Pi

PCWorld

Facebook's new open-source Caffe2 deep-learning framework brings added intelligence to mobile devices like the iPhone and Android phones, as well as low-power computers like the Raspberry Pi. Caffe2 can be used to program artificial intelligence features into smartphones and tablets, allowing them to recognize images, video, text, and speech and be more situationally aware. It's important to note that Caffe2 is not an AI program, but a tool that allows AI to be programmed into smartphones. It takes just a few lines of code to write learning models, which can then be bundled into apps. The release of Caffe2 is significant.


Smart Machines Need Smart Silicon

#artificialintelligence

It seems like even the biggest hyperscale platform developers who have long touted software-defined architectures as the key to computing nirvana are starting to learn a cardinal rule of infrastructure: no matter how much you try to abstract it, basic hardware still matters. A key example of this is Google's Tensor Processing Unit (TPU), which the company designed specifically for machine learning and other critical workloads that were starting to push the limits of available CPUs and GPUs. In fact, the company says that without the TPU, it was looking at doubling its data center footprint in order to support applications like voice recognition and image search. The TPU is custom-designed to work with the TensorFlow software library, generating results 15 to 30 times faster than state-of-the-art Intel Haswell or Nvidia K80 devices. This may seem like a harbinger of bad times ahead for Intel and Nvidia, but the broader picture is a bit more muddled.


Competition in AI platform market to heat up in 2017

#artificialintelligence

Intel's Nervana platform is a $400 million investment in AI. Back in November, Intel announced what it claims is a comprehensive AI platform for data center and compute applications called Nervana, with its focus aimed directly at taking on Nvidia's GPU solutions for enterprise users. The platform is the result of the chipmaker's $400 million acquisition last August of Nervana Systems, a 48-person startup led by former Qualcomm researcher Naveen Rao. Built using FPGA technology and designed for highly optimized AI solutions, Intel claims Nervana will deliver up to a 100-fold reduction in the time it takes to train a deep learning model within the next three years. The company intends to integrate Nervana technology into its Xeon and Xeon Phi processor lineups. During Q1, it will test the Nervana Engine chip, codenamed 'Lake Crest,' and make it available to key customers later in the year.


Deep Learning Institute Workshop hosted by Dedicated Computing, NVIDIA and Milwaukee School of Engineering

#artificialintelligence

Dedicated Computing is co-hosting a Deep Learning Institute workshop in collaboration with NVIDIA and Milwaukee School of Engineering (MSOE). The workshop will take place at MSOE on April 13, 2017. Deep learning is a new area of machine learning that seeks to use algorithms, big data, and parallel computing to enable real-world applications and deliver results. Machines are now able to learn at the speed, accuracy, and scale required for true artificial intelligence. This technology is used to improve self-driving cars, aid mega-city planners, and help discover new drugs to cure disease.


Rapid GPU Evolution at Chinese Web Giant Tencent

#artificialintelligence

Like other major hyperscale web companies, China's Tencent, which operates a massive network of ad, social, business, and media platforms, is increasingly reliant on two trends to keep pace. The first is not surprising: efficient, scalable cloud computing to serve internal and user demand. The second is more recent and includes a wide breadth of deep learning applications, including the company's own internally developed Mariana platform, which powers many user-facing services. When the company introduced its deep learning platform back in 2014 (at a time when companies like Baidu, Google, and others were expanding their GPU counts for speech and image recognition applications), it noted that its main challenges were in providing adequate compute power and parallelism for fast model training. "For example," Mariana's creators explain, "the acoustic model of automatic speech recognition for Chinese and English in Tencent WeChat adopts a deep neural network with more than 50 million parameters, more than 15,000 senones (tied triphone model represented by one output node in a DNN output layer) and tens of billions of samples, so it would take years to train this model by a single CPU server or off-the-shelf GPU."
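The scale Mariana's creators quote is easy to sanity-check: a fully connected DNN's parameter count is just the sum of its weight matrices and bias vectors. A small sketch, with layer sizes that are purely hypothetical, chosen only to land near the quoted 50-million-parameter, 15,000-senone shape:

```python
# Hedged sketch: counting parameters in a fully connected feed-forward DNN.
# The layer sizes below are illustrative, NOT Tencent's actual topology.
def dnn_param_count(layer_sizes):
    """Total weights plus biases between consecutive fully connected layers."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out  # weight matrix plus bias vector
    return total

# Hypothetical acoustic model: 440-dim input features, six 2048-unit
# hidden layers, and a 15,000-senone output layer.
layers = [440] + [2048] * 6 + [15000]
print(dnn_param_count(layers))  # ~52.6 million parameters
```

A network of roughly this shape already exceeds 50 million parameters, most of them in the wide senone output layer, which illustrates why single-server training at this scale was impractical.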


The New Intel: How Nvidia Went From Powering Video Games To Revolutionizing Artificial Intelligence

#artificialintelligence

Nvidia cofounder Chris Malachowsky is eating a sausage omelet and sipping burnt coffee in a Denny's off the Berryessa overpass in San Jose. It was in this same dingy diner in April 1993 that three young electrical engineers--Malachowsky, Curtis Priem and Nvidia's current CEO, Jen-Hsun Huang--started a company devoted to making specialized chips that would generate faster and more realistic graphics for video games. East San Jose was a rough part of town back then--the front of the restaurant was pocked with bullet holes from people shooting at parked cop cars--and no one could have guessed that the three men drinking endless cups of coffee were laying the foundation for a company that would define computing in the early 21st century in the same way that Intel did in the 1990s. "There was no market in 1993, but we saw a wave coming," Malachowsky says. "There's a California surfing competition that happens in a five-month window every year.


1 Company Is Already Winning AI -- The Motley Fool

#artificialintelligence

NVIDIA (NASDAQ:NVDA) is primarily known as the company that revolutionized computer gaming. The debut of the Graphics Processing Unit (GPU) in 1999 provided gamers with faster, clearer, and more lifelike images. The GPU was designed to quickly perform complex mathematical calculations that were necessary to accelerate the creation of realistic graphics. It achieved this feat by performing many functions at the same time, known as parallel computing. This resulted in faster, smoother motion in game graphics and a revolution in modern gaming.
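The parallel-computing idea described above can be sketched in miniature: apply one function independently to many data elements at once. The thread pool below is only a stand-in for illustration (a real GPU runs thousands of such lanes in dedicated hardware), and the `shade` function is invented for this example:

```python
# Hedged sketch of data parallelism, the principle behind GPU computing:
# the same operation is applied independently to every element, so the
# elements can be processed simultaneously. A thread pool stands in for
# the GPU's hardware lanes here; `shade` is a toy, made-up per-pixel op.
from concurrent.futures import ThreadPoolExecutor

def shade(pixel):
    # A toy "pixel shader": identical math applied to each pixel.
    return (pixel * 3 + 1) % 256

pixels = list(range(8))

# Serial version: one element after another, like a single CPU core.
serial = [shade(p) for p in pixels]

# Parallel version: the same function mapped across all elements at once.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(shade, pixels))

print(serial == parallel)  # same results, order preserved
```

Because each element is independent, the results are identical regardless of how many workers run concurrently; that independence is what lets a GPU trade a few fast cores for thousands of slower ones.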