
Collaborating Authors

 supermicro


Cortical.io Launches "Message Intelligence" to Tackle Formidable Enterprise Communications Challenges

#artificialintelligence

Enterprises in particular are overwhelmed by communications from inside and outside their organizations. Corporate email remains at the heart of this flood of data, with content that must be acted on in a timely manner. Consider one fact: in 2019, more than 128.8 billion business emails were sent and received per day. This volume impairs productivity in matters where speed is critical, resulting in operational delays and rising costs. The product addresses the problem by understanding the semantic content, the meaning and intent of the messages, at massive scale in real time.


Supermicro Accelerates AI and Deep Learning with NGC-Ready Servers - insideHPC

#artificialintelligence

Today Supermicro announced the industry's broadest portfolio of validated NGC-Ready systems optimized to accelerate AI and deep learning applications. Supermicro is highlighting many of these systems today at the Supermicro GPU Live Forum, held in conjunction with NVIDIA GTC Digital. Supermicro NGC-Ready systems allow customers to train AI models using NVIDIA V100 Tensor Core GPUs and to perform inference using NVIDIA T4 Tensor Core GPUs. NGC hosts GPU-optimized software containers for deep learning, machine learning, and HPC applications, along with pre-trained models and SDKs. These can run anywhere the Supermicro NGC-Ready systems are deployed, whether in data centers, the cloud, edge micro-datacenters, or distributed remote locations as environment-resilient and secured NGC-Ready for Edge servers powered by the NVIDIA EGX intelligent edge platform. "With over 26 years of experience delivering state-of-the-art computing solutions, Supermicro systems are the most power-efficient, the highest performing, and the best value," said Charles Liang, CEO and president of Supermicro. "With support for fast networking and storage, as well as NVIDIA GPUs, our Supermicro NGC-Ready systems are the most scalable and reliable servers to support AI. Customers can run their AI infrastructure with the highest ROI." Supermicro currently leads the industry with the broadest portfolio of NGC-Ready servers optimized for data center and cloud deployments and is continuing to expand that portfolio. In addition, the company offers five validated NGC-Ready for Edge (EGX) servers optimized for edge inferencing applications. "NVIDIA's container registry, NGC, enables superior performance for deep learning frameworks and pre-trained AI models with state-of-the-art accuracy," said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA.


Cortical.io Announces the First Application of Real-Time Semantic Supercomputing

#artificialintelligence

Cortical.io, a leader in AI-based Natural Language Understanding (NLU) solutions, announced the debut of a new class of high-performance enterprise applications based on "Semantic Supercomputing," which combines AI-based NLU software inspired by neuroscience with hardware acceleration to create new solutions for understanding and processing streams of natural language content at massive scale in real time. "Ever-increasing unstructured data is overwhelming the world and the available processing power and current statistical approaches to deal with it," said Francisco Webber, co-founder and CEO of Cortical.io. "We are taking the concept of supercomputing to the next level with the introduction of Semantic Supercomputing and the ability to deliver real-time processing of semantic content." The first application of Semantic Supercomputing, a Messaging Classification Appliance that can filter, classify, and route streams of messages in real time by understanding their semantic content (the meaning and intent of the messages), was unveiled today in a keynote session at Xilinx Developer Forum (XDF) Europe, the Xilinx, Inc. developer conference held November 12-13 in The Hague. Building on the strategic relationship with Xilinx announced at last month's XDF Americas in San Jose, Cortical.io is developing the first of a series of FPGA-based appliances powered by Xilinx Alveo accelerator cards. The appliance will enable enterprises to filter and route massive volumes of email messages in real time, with high precision and recall, based on the meaning of each message. The product will be available in Q1 2020. "The goal is to reduce the wasted effort of handling irrelevant or misdirected emails by first-line business operations, including support, sales, and purchasing," the company said. "The appliance will be able to handle a massive volume of messages daily in real time."
Enterprise system administrators will be able to train the system and customize the filtering and routing based on a small number of sample emails. Once trained, the appliance works across multiple languages (English, Spanish, German, Portuguese, Cantonese, Arabic, French, Italian, Mandarin Chinese, Dutch).
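The workflow described above, training a message router from a small number of labeled sample emails, can be sketched in miniature. This is a toy illustration only: Cortical.io's appliance uses its own proprietary semantic representations with FPGA acceleration, whereas this stand-in uses a plain bag-of-words nearest-centroid classifier, and all queue names and sample emails are hypothetical.

```python
# Toy sketch of "train on a few sample emails, then route new messages."
# NOT Cortical.io's method: a simple bag-of-words nearest-centroid stand-in.
from collections import Counter
import math

def tokens(text):
    return [w.lower().strip(".,!?") for w in text.split()]

def centroid(samples):
    # Sum token counts over all sample messages for one queue.
    c = Counter()
    for s in samples:
        c.update(tokens(s))
    return c

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def train(labeled):
    # labeled: {queue_name: [sample emails]} -> one centroid per queue
    return {queue: centroid(samples) for queue, samples in labeled.items()}

def route(model, message):
    # Send the message to the queue whose centroid it most resembles.
    return max(model, key=lambda q: cosine(model[q], centroid([message])))

model = train({
    "support":    ["My login fails with an error", "The app crashes on start"],
    "sales":      ["Please send a quote for 50 licenses", "What does pricing look like"],
    "purchasing": ["Invoice attached for order 1234", "Purchase order confirmation"],
})
print(route(model, "I cannot log in, the app shows an error"))  # → support
```

A real semantic approach would match on meaning rather than shared surface tokens, which is what allows the appliance described above to generalize from so few samples and across languages.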


Deep learning performance on Red Hat OpenShift with Supermicro

#artificialintelligence

Red Hat and Supermicro ran the MLPerf Training v0.6 AI benchmark suite on the Red Hat OpenShift Container Platform with Supermicro hardware and compared the results to the MLPerf Training v0.6 results published by Nvidia. In addition to excellent performance, we demonstrated how OpenShift provides easy access to high-performance machine learning model training when running on this Supermicro reference architecture.


Supermicro teams with WekaIO for Deep Learning Performance Density - insideHPC

#artificialintelligence

Today WekaIO announced that Supermicro, a global leader in enterprise computing, storage, networking solutions, and green computing technology, is an authorized OEM partner. The Supermicro BigTwin Server featuring the WekaIO File System, WekaFS, is the industry's first and only 2U multi-node system supporting the highest performance processor, memory, storage, and I/O, with 30 percent better thermal capacity and the ability to lower energy consumption in the datacenter. This appliance is an integrated, preconfigured solution that delivers unmatched performance at scale. "The BigTwin Server featuring WekaFS offering is a milestone in our relationship with Supermicro," said Barbara Murphy, Vice President of Marketing at WekaIO. "The solution is already in use by many customers with deep learning applications and is exceeding expectations for performance and value. By offering this preconfigured solution with Supermicro, we'll be able to simplify the customers' acquisition experience."


Supermicro Unveils 2 Petaflop SuperServer Based on New NVIDIA HGX-2 - insideHPC

#artificialintelligence

Today Supermicro announced the company is among the first to adopt the NVIDIA HGX-2 cloud server platform to develop the world's most powerful systems for artificial intelligence and high-performance computing. "To help address the rapidly expanding size of AI models that sometimes require weeks to train, Supermicro is developing cloud servers based on the HGX-2 platform that will deliver more than double the performance," said Charles Liang, president and CEO of Supermicro. "The HGX-2 system will enable efficient training of complex models. It combines 16 Tesla V100 32GB SXM3 GPUs connected via NVLink and NVSwitch to work as a unified 2 PetaFlop accelerator with half a terabyte of aggregate memory to deliver unmatched compute power." From natural speech by computers to autonomous vehicles, rapid progress in AI has transformed entire industries. To enable these capabilities, AI models are exploding in size. HPC applications are similarly growing in complexity as they unlock new scientific insights. Supermicro's HGX-2 based systems will provide a superset design for datacenters accelerating AI and HPC in the cloud. With fine-tuned optimizations, Supermicro's HGX-2 server will deliver the highest compute performance and memory for rapid model training. "As AI model complexity and size are exploding, researchers and data scientists need new levels of GPU-accelerated computing," said Ian Buck, vice president and general manager of accelerated computing at NVIDIA.


Supermicro(R) Introduces NVIDIA(R) Pascal(TM) GPU-Enabled Server Solutions Featuring NVIDIA Tesla(R) P100 GPUs

#artificialintelligence

Super Micro Computer, Inc. (SMCI), a global leader in compute, storage, and networking technologies and green computing, today announced the general availability of its SuperServer solutions optimized for NVIDIA Tesla P100 accelerators with the new Pascal GPU architecture. Supermicro's innovative, GPU-optimized single-root-complex PCI-E design is proven to dramatically improve GPU peer-to-peer communication efficiency over QPI and PCI-E links, with up to 21% higher throughput and 60% lower latency compared to previous generation products. "Our high-performance computing solutions enable deep learning, engineering, and scientific fields to scale out their compute clusters to accelerate their most demanding workloads and achieve the fastest time-to-results with maximum performance-per-watt, per-square-foot, and per-dollar," said Charles Liang, President and CEO of Supermicro. "With our latest innovations incorporating the new NVIDIA P100 GPUs, our customers can accelerate their applications and innovations to solve the most complex real world problems." "Supermicro's new high-density servers are optimized to fully leverage the new NVIDIA Tesla P100 accelerators to provide enterprise and HPC customers with an entirely new level of computing horsepower," said Ian Buck, General Manager of the Accelerated Computing Group at NVIDIA.