LAS VEGAS, NV--(Marketwired - Jan 4, 2017) - CES -- NVIDIA (NASDAQ: NVDA) today unveiled the new NVIDIA SHIELD TV -- an Android open-platform media streamer built on bleeding-edge visual computing technology that delivers unmatched experiences in streaming, gaming and AI. Sporting a sleek new design and now shipping with both a remote and a game controller, SHIELD provides the best, most complete entertainment experience in the living room. "NVIDIA's rich heritage in visual computing and deep learning has enabled us to create this revolutionary device," said Jen-Hsun Huang, founder and chief executive officer of NVIDIA, who revealed SHIELD during his opening keynote address at CES. "SHIELD TV is the world's most advanced streamer. Its brilliant 4K HDR quality, hallmark NVIDIA gaming performance and broad access to media content will bring families hours of joy. And with SHIELD's new AI home capability, we can control and interact with content through the magic of artificial intelligence from anywhere in the house," he said.
What makes Bach sound like Bach? MusicNet is a new publicly available dataset from UW researchers that labels each note of 330 classical compositions in ways that can teach machine learning algorithms about the basic structure of music. The composer Johann Sebastian Bach left behind an incomplete fugue upon his death, either as an unfinished work or perhaps as a puzzle for future composers to solve. A classical music dataset released Wednesday by University of Washington researchers -- which enables machine learning algorithms to learn the features of classical music from scratch -- raises the likelihood that a computer could expertly finish the job. MusicNet is the first publicly available large-scale classical music dataset with curated fine-level annotations. It's designed to allow machine learning researchers and algorithms to tackle a wide range of open challenges -- from note prediction to automated music transcription to offering listening recommendations based on the structure of a song a person likes, instead of relying on generic tags or what other customers have purchased. "At a high level, we're interested in what makes music appealing to the ears, how we can better understand composition, or the essence of what makes Bach sound like Bach."
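To make the note-prediction task concrete, fine-level annotations of the kind MusicNet curates pair each note with its timing and pitch. The sketch below shows one common way such annotations are turned into per-frame training labels; the `(onset, offset, pitch)` tuple format and the helper name are assumptions for illustration, not MusicNet's actual file layout.

```python
# Hypothetical sketch: converting note annotations of the form
# (onset_sec, offset_sec, midi_pitch) into per-frame label sets,
# the usual target representation for note-prediction models.
# The tuple layout is assumed, not MusicNet's on-disk format.

def frame_labels(notes, duration_sec, frames_per_sec=100):
    """Return, for each analysis frame, the set of active MIDI pitches."""
    n_frames = int(duration_sec * frames_per_sec)
    labels = [set() for _ in range(n_frames)]
    for onset, offset, pitch in notes:
        start = int(onset * frames_per_sec)
        end = min(int(offset * frames_per_sec), n_frames)
        for t in range(start, end):
            labels[t].add(pitch)
    return labels

# Two overlapping notes: A4 (MIDI 69) from 0.0-1.0 s, C5 (72) from 0.5-1.5 s.
labels = frame_labels([(0.0, 1.0, 69), (0.5, 1.5, 72)], duration_sec=2.0)
```

A model for note prediction would then learn to map each frame of audio features to its label set, which is exactly the kind of open challenge the dataset is designed to support.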
Tech giant Intel has announced a new strategy focused on artificial intelligence (AI) and based on a new portfolio of technologies centered on its recent acquisition of AI company Nervana Systems. The new portfolio will include products and services for everything from the network edge to the data center to help accelerate the growth of AI technologies. "Intel sees AI transforming the way businesses operate and how people engage with the world," the company said in a statement yesterday. "Intel is assembling the broadest set of technology options to drive AI capabilities in everything from smart factories and drones to sports, fraud detection and autonomous cars." Dubbed Intel Nervana, the new platform comes courtesy of the company's acquisition of the two-year-old Nervana Systems announced three months ago.
The platform will be optimized for AI workloads with an emphasis on both speed and ease of use. The first product in the platform, a chip codenamed "Lake Crest," will begin testing in the first half of next year and will eventually be available to key customers later that year, according to Intel.
With up to 16 NVIDIA Tesla K80 GPUs, P2 instances are the most powerful GPU instances available in the cloud. "The massive parallel floating point performance of Amazon EC2 P2 instances, combined with up to 64 vCPUs and 732 GB host memory, will enable customers to realize results faster and process larger datasets than was previously possible." P2 instances allow customers to build and deploy compute-intensive applications using the CUDA parallel computing platform or the OpenCL framework without up-front capital investments. To offer the best performance for these high performance computing applications, the largest P2 instance offers 16 GPUs with a combined 192 gigabytes (GB) of video memory, 40,000 parallel processing cores, 70 teraflops of single precision floating point performance, over 23 teraflops of double precision floating point performance, and GPUDirect technology for higher bandwidth and lower latency peer-to-peer communication between GPUs. P2 instances also feature up to 732 GB of host memory, up to 64 vCPUs using custom Intel Xeon E5-2686 v4 (Broadwell) processors, dedicated network capacity for I/O operations, and enhanced networking through the Amazon EC2 Elastic Network Adapter.
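The headline figures above are internally consistent, which can be checked with a little arithmetic. The sketch below assumes the Tesla K80's published per-GPU specs -- 2,496 CUDA cores and an 875 MHz boost clock per GK210 die, two FLOPs per core per cycle (fused multiply-add), and GK210's 1:3 double-to-single precision ratio -- none of which appear in the article itself.

```python
# Sanity check of the quoted P2 figures. Assumptions (not stated in
# the article): 2,496 CUDA cores and an 875 MHz boost clock per K80
# GPU, 2 FLOPs/core/cycle via fused multiply-add, and a 1:3
# double-to-single precision throughput ratio on GK210.

gpus = 16
cores_per_gpu = 2_496
boost_clock_hz = 875e6

total_cores = gpus * cores_per_gpu                    # ~40,000 cores
sp_tflops = total_cores * boost_clock_hz * 2 / 1e12   # single precision
dp_tflops = sp_tflops / 3                             # double precision

print(total_cores, round(sp_tflops, 1), round(dp_tflops, 1))
```

Under those assumptions the totals come out to roughly 39,936 cores, ~69.9 single-precision teraflops and ~23.3 double-precision teraflops, matching the article's "40,000 cores," "70 teraflops" and "over 23 teraflops" figures.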
ConnectX-5 introduces smart offloading engines that enable the highest application performance while maximizing data center return on investment. With the exponential growth of data and the increase in businesses that take advantage of real-time data processing for high performance computing (HPC), data analytics, machine learning, national security and 'Internet of Things' applications, the market needs not only the fastest interconnect available, but also interconnect intelligence that can perform data algorithms as the data moves throughout the data center. "The new ConnectX-5 100G adapter further enables high performance, data analytics, deep learning, storage, Web 2.0 and more applications to perform data-related algorithms on the network to achieve the highest system performance and utilization," said Gilad Shainer, vice president, marketing at Mellanox Technologies. "Dell and Mellanox are long-time partners, delivering Dell HPC Systems utilizing Mellanox InfiniBand, including the recently announced one petaflops system at the Center for High Performance Computing in South Africa," said Jim Ganthier, vice president and general manager, Engineered Solutions, HPC and Cloud, Dell.
"Hindsait's main goal has been to develop a robust AI platform that specifically addresses the needs of healthcare organizations," said Frost & Sullivan Research Analyst Harpreet Singh Buttar. "This platform assists in reducing unnecessary health services, eliminating errors and biases in care delivery and improving overall quality of care." Hindsait's system has proven to be highly adaptable and scalable, based on unique use case requirements. Its capabilities range from natural language processing (NLP) and machine learning to cognitive computing and predictive analytics that directly help providers and payers resolve healthcare delivery issues. Hindsait boasts a wide range of services, from analyzing unstructured data, such as clinical notes, patient charts and prescriptions, to real-time optimization of diagnostic and treatment plans.
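To give a flavor of what "analyzing unstructured data, such as clinical notes" can mean in practice, the sketch below shows a common first NLP step: pulling structured fields out of free text so downstream models can score them. This is purely illustrative -- the note text, field names and patterns are invented here and say nothing about Hindsait's actual pipeline.

```python
# Illustrative only (not Hindsait's actual method): a minimal first
# NLP pass that extracts structured fields from an unstructured
# clinical note. The note and patterns are invented for the example.
import re

NOTE = "Pt is a 64 y/o male. BP 142/91. Rx: metformin 500 mg BID."

PATTERNS = {
    "age": r"(\d+)\s*y/o",
    "blood_pressure": r"BP\s*(\d+/\d+)",
    "medication": r"Rx:\s*([a-zA-Z]+)",
}

def extract_fields(note):
    """Pull simple structured fields out of free-text clinical notes."""
    out = {}
    for field, pattern in PATTERNS.items():
        match = re.search(pattern, note)
        if match:
            out[field] = match.group(1)
    return out

fields = extract_fields(NOTE)
```

Production systems replace the hand-written patterns with learned models, but the output -- structured fields from unstructured text -- is the same kind of intermediate that predictive analytics then consumes.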
SAN DIEGO, CA--(Marketwired - Jun 6, 2016) - KnuEdge Inc., a neural technology innovation company that launched today, introduced its KNUPATH LambdaFabric processor technology enabling ground-breaking scalability, latency and workload performance in next-generation data centers. With a fundamentally different architecture than legacy products, KNUPATH can operate alone or be integrated with other devices, and it is available now to both end customers and technology vendors seeking to create data center neural computing capabilities to support advancements in machine learning, IoT and signal processing. "Many of today's CPUs, GPUs and FPGAs force system designers to either create workarounds with last-generation chip sets or reduce their requirements for advanced-compute projects," said Dan Goldin, Founder and CEO of KnuEdge. "After ten years of stealth development and rigorous testing, LambdaFabric enters the market as mature technology which enables system designers to meet the most demanding requirements now, and also helps them rethink what is possible with neural computing in the future." As evidenced by recent announcements such as Google's Tensor Processing Unit, there is increasing industry interest in targeted processor acceleration for machine learning and other growing workloads.
Using advanced machine learning algorithms, Device Analyzer provides users with greater visibility into the operational state of each of their monitored devices. "Marking the effective integration of machine learning technology and data science, Device Analyzer introduces breakthrough architectural innovations that support process optimization across the PowerRadar software platform," said Yaniv Vardi, CEO of Panoramic Power. The first implementation of Device Analyzer ships in PowerRadar Version 2.0, where the system collects device-level energy data, automatically learns device patterns and, after a short training period, automatically identifies the different operational states of each device. Panoramic Power, recently acquired by Direct Energy through its parent company Centrica, enables businesses to optimize their energy consumption and improve system level performance and operations.
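Learning a device's operational states from its energy data can be sketched with a simple example. One textbook approach -- chosen here for illustration, with no claim that it is Panoramic Power's algorithm -- is to cluster watt readings so that off / idle / active states emerge from the data, e.g. with one-dimensional k-means:

```python
# Illustrative sketch only (not Panoramic Power's actual algorithm):
# clustering a device's power readings with 1-D k-means so that
# distinct operational states (off / idle / active) emerge as
# cluster centroids after a short "training" pass over the data.

def kmeans_1d(samples, k, iters=20):
    """Cluster scalar power readings into k states; return sorted centroids."""
    data = sorted(samples)
    # Seed centroids from evenly spaced sample quantiles.
    centroids = [data[i * (len(data) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for x in data:
            nearest = min(range(k), key=lambda i: abs(x - centroids[i]))
            buckets[nearest].append(x)
        centroids = [sum(b) / len(b) if b else centroids[i]
                     for i, b in enumerate(buckets)]
    return sorted(centroids)

# Simulated watt readings: off (~0 W), idle (~40 W), active (~200 W).
readings = [0, 1, 2, 38, 41, 43, 196, 201, 204]
states = kmeans_1d(readings, k=3)
```

Once the state centroids are learned, classifying a new reading is just a nearest-centroid lookup, which is how a system could report each device's current operational state in real time.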
With me on the call today from NVIDIA are Jen-Hsun Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer. We also extended our VR platform by adding new capabilities to our VRWorks software development kit that help provide an even greater sense of presence in VR. The P100 utilizes a combination of technologies, including NVLink, our high-speed interconnect that allows deep learning application performance to scale across multiple GPUs; high memory bandwidth; and multiple hardware features designed to natively accelerate AI applications. Universities, hyperscale vendors and large enterprises developing AI-based applications are showing strong interest in the system.