If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Like other major hyperscale web companies, China's Tencent, which operates a massive network of ad, social, business, and media platforms, is increasingly reliant on two trends to keep pace. The first is not surprising: efficient, scalable cloud computing to serve internal and user demand. The second is more recent and spans a broad range of deep learning applications, including the company's own internally developed Mariana platform, which powers many user-facing services. When the company introduced its deep learning platform back in 2014 (at a time when companies like Baidu, Google, and others were expanding their GPU counts for speech and image recognition applications), it noted that its main challenges were providing adequate compute power and parallelism for fast model training. "For example," Mariana's creators explain, "the acoustic model of automatic speech recognition for Chinese and English in Tencent WeChat adopts a deep neural network with more than 50 million parameters, more than 15,000 senones (tied triphone model represented by one output node in a DNN output layer) and tens of billions of samples, so it would take years to train this model by a single CPU server or off-the-shelf GPU."
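The quoted numbers make the "years on a single CPU server" claim easy to sanity-check with a rough back-of-envelope estimate. The sustained-throughput figures, epoch count, and FLOPs-per-sample factor below are illustrative assumptions for order-of-magnitude scale, not figures from the article:

```python
# Back-of-envelope training-time estimate for a large acoustic model.
# All throughput numbers are illustrative assumptions, not measurements.

PARAMS = 50_000_000          # ~50 million parameters (from the article)
SAMPLES = 20_000_000_000     # "tens of billions" of training samples
EPOCHS = 5                   # assumed number of passes over the data

# Assume roughly 6 FLOPs per parameter per sample (forward + backward pass).
flops_total = 6 * PARAMS * SAMPLES * EPOCHS

CPU_FLOPS = 100e9            # assumed ~100 GFLOP/s sustained on one CPU server
GPU_FLOPS = 2e12             # assumed ~2 TFLOP/s sustained on a 2014-era GPU

seconds_per_year = 3600 * 24 * 365
cpu_years = flops_total / CPU_FLOPS / seconds_per_year
gpu_years = flops_total / GPU_FLOPS / seconds_per_year

print(f"Single CPU server: ~{cpu_years:.0f} years")
print(f"Single GPU:        ~{gpu_years:.1f} years")
```

Under these assumptions the single-CPU estimate lands at nearly a decade, which is consistent with the "years to train" claim and with why Tencent, like Baidu and Google, turned to GPU parallelism.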
The Solutions Architect will have a primary role servicing our Dell business, focusing on the sales-out technical support of GPU-enabled Dell servers.
What you'll be doing:
· Provide technical support for our datacenter products included in Dell servers, covering software development, training, benchmarking, and consultation during customer sales meetings
· Support key company initiatives, including development of Deep Learning assets and support for penetration of our platform into Deep Learning research, development, and deployment
Responsibilities:
· Serve as the first and primary point of technical support for all NVIDIA products provided to partners
· Identify and analyze all reported customer issues, and personally solve technical issues to the extent possible
· EE or CS background
· Demonstrated ability to write high-performance code
Ways to stand out from the crowd:
· Previous experience directly with Dell
· Previous experience in a customer technical support role
· Previous background in Deep Learning (preferred) or Machine Learning disciplines
· Previous experience with CUDA, OpenACC, or a related GPU programming or HPC discipline
NVIDIA is widely considered to be one of the technology world's most desirable employers.
New Zealand's IMAGR and Silicon Valley's Mashgin aim to make checking out of grocery stores and company cafeterias a walk in the park. IMAGR makes SmartCart, an ordinary grocery cart fitted with an AI-powered video camera; using our TITAN X GPU and the TensorFlow deep learning framework, IMAGR initially trained its algorithms on images of grocery store products. Mashgin customizes its system for each company's cafeteria, and its deep learning algorithm learns new items as more people use it.
6. "There are an estimated 3,000 AI startups worldwide, and many of them are building on NVIDIA's platform. They're using NVIDIA's GPUs to put AI into apps for trading stocks, shopping online and navigating drones." (Aaron Tilley, Forbes)
7. The retail sector is now well positioned to leverage AI and Deep Learning as these new technologies develop.
8. AI software such as computer vision is being developed by startups to help retail consumers find the perfect, individualized fit. ThirdLove offers an app that enables women to find a well-fitting bra from home using a mobile device and deep learning. Volumental offers computer vision applications for sizing shoes and eyewear, creating an individualized retail experience for customers.
On Monday, IBM announced that it has collaborated with Nvidia to provide a complete package for customers who want to jump right into the deep learning market without the hassle of determining and setting up the perfect combination of hardware and software. The company also revealed that a cloud-based model is available, eliminating the need to install local hardware and software. To trace this project, we have to jump back to September, when IBM launched a new series of "OpenPower" servers that rely on the company's Power8 processor. The launch was notable because this chip features integrated NVLink technology, a proprietary communications link created by Nvidia that directly connects the central processor to an Nvidia graphics processor, namely the Tesla P100 in this case. Server-focused x86 processors from Intel and AMD don't have this type of integrated connectivity between the CPU and GPU.
This jointly optimized platform runs the new Microsoft Cognitive Toolkit (formerly CNTK) on NVIDIA GPUs, including the NVIDIA DGX-1 supercomputer, which uses Pascal architecture GPUs with NVLink interconnect technology, and on Azure N-Series virtual machines, currently in preview. Faster performance: When compared to running on CPUs, the GPU-accelerated Cognitive Toolkit performs deep learning training and inference much faster on NVIDIA GPUs available in Azure N-Series servers and on premises. Certain statements in this press release including, but not limited to the impact and benefits of NVIDIA's and Microsoft's AI acceleration collaboration, Tesla GPUs, DGX-1, the Pascal architecture, NVLink interconnect technology and the Microsoft Cognitive Toolkit; the availability of Azure N-Series virtual machines; and the continuation of NVIDIA's and Microsoft's collaboration are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations.
But not all companies can afford that level of resources for deep learning, so they turn to cloud services, where servers in remote data centers do the heavy lifting. Microsoft's Azure, however, uses older Nvidia GPUs, and it now has competition from Nimbix, which offers a cloud service with faster GPUs based on Nvidia's latest Pascal architecture. Nimbix offers customers cloud services that run on Tesla P100s -- which are among Nvidia's fastest GPUs -- in IBM Power S822LC servers. Azure offers cloud services with servers running Nvidia's Tesla K80, based on the older Kepler architecture, and Tesla M40, based on Maxwell, a generation behind Pascal.
Cirrascale Corporation, a premier developer of server and cloud solutions enabling GPU-driven deep learning infrastructure, has announced the future availability of the IBM Power Systems S822LC for HPC in multiple configurations for its GPU-as-a-Service cloud platform. "With the Cirrascale cloud platform, powered by Pascal-based NVIDIA Tesla P100 GPU accelerators and NVLink, scientists and HPC users can deploy scalable compute resources on demand that deliver a dramatic boost in throughput for deep learning workloads," the company said. Customers interested in renting time can visit www.gpuasaservice.com. Cirrascale also sells hardware solutions to large-scale deep learning infrastructure operators, hosting and cloud service providers, and HPC users.
In a new initiative, UK-based PC systems maker and retailer Scan 3XS is providing remote access to Nvidia DGX-1 Deep Learning Supercomputers. To allow customers to decide whether the significant investment involved in acquiring a DGX-1 is for them, Scan has begun a DGX-1 Proof of Concept program that lets end users run custom data processing tests on one of its own deep learning machines. With such a system "you can immediately shorten data processing time, visualize more data, accelerate deep learning frameworks, and design more sophisticated neural networks," says Nvidia. At its heart, the DGX-1 is built around eight Nvidia Tesla P100 GPU accelerators using Nvidia's newest Pascal architecture.
In the public cloud business, scale is everything – hyper, in fact – and having too many different kinds of compute, storage, or networking makes support more complex and investment in infrastructure more costly. We have estimated the single precision (SP) and double precision (DP) floating point performance of the GRID K520 card; the G2 instances have either one or four of these fired up, with an appropriate amount of CPU to back them. The P2 instances deliver much better bang for the buck, particularly on double precision floating point work. For single precision work, the price per teraflops drops only around 22 percent from the G2 instances to the P2 instances, but the compute density of the node has gone up by a factor of 7.1X and the GPU memory capacity has gone up by a factor of 12X within a single node. That density increase doesn't directly affect most users, but it does help Amazon provide GPU processing at a lower cost, because fewer servers and GPUs are needed to deliver a given chunk of teraflops.
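The G2-versus-P2 comparison can be made concrete using only the relative figures stated above. The G2 baseline values below are normalized placeholders (set to 1.0), not real instance specs; only the ratios come from the text:

```python
# Relative G2 vs P2 comparison, using only the ratios quoted in the article.
# G2 baseline values are normalized to 1.0 and are placeholders, not real specs.

g2 = {
    "price_per_sp_tflops": 1.0,
    "sp_tflops_per_node": 1.0,
    "gpu_mem_per_node": 1.0,
}

p2 = {
    "price_per_sp_tflops": g2["price_per_sp_tflops"] * (1 - 0.22),  # ~22% cheaper per SP teraflops
    "sp_tflops_per_node": g2["sp_tflops_per_node"] * 7.1,           # 7.1X compute density per node
    "gpu_mem_per_node": g2["gpu_mem_per_node"] * 12.0,              # 12X GPU memory per node
}

# Fewer nodes are needed to deliver a fixed pool of SP teraflops:
nodes_ratio = g2["sp_tflops_per_node"] / p2["sp_tflops_per_node"]

print(f"P2 price per SP teraflops: {p2['price_per_sp_tflops']:.2f}x the G2 price")
print(f"Nodes needed for the same SP teraflops: {nodes_ratio:.1%} of the G2 count")
```

This illustrates the point in the text: the per-teraflops price cut looks modest, but needing roughly one seventh as many nodes for the same compute pool is where Amazon's cost advantage comes from.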