Results


Pure Storage Announces Significant Customer Momentum for AI and ML Workloads - insideBIGDATA

#artificialintelligence

Pure Storage (NYSE: PSTG), a leading independent all-flash data platform vendor for the cloud era, announced significant customer momentum for FlashBlade, the system purpose-built for modern analytics. Since general availability in January 2017, FlashBlade has gained traction among organizations running and innovating with emerging workloads, specifically modern analytics, artificial intelligence (AI) and machine learning (ML). Data is at the center of the modern analytics revolution. Large amounts of data must be delivered to parallel processors, such as multi-core CPUs and GPUs, at very high speeds to train machine learning and analytics algorithms faster and more accurately. Today, most production machine learning is undertaken by hyperscalers and large, web-scale companies.
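
The data-delivery point generalizes beyond any one vendor: training throughput is often bounded by how fast batches reach the accelerator. As a minimal sketch (using PyTorch as an illustrative choice, not something the article prescribes), parallel worker processes and pinned host memory keep the copy pipeline full:

```python
# Minimal sketch: keeping a GPU fed during training by loading batches
# in parallel worker processes. PyTorch is an illustrative choice here.
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical in-memory dataset standing in for data served by fast storage.
dataset = TensorDataset(torch.randn(10_000, 3, 224, 224),
                        torch.randint(0, 10, (10_000,)))

# num_workers parallelizes loading; pin_memory speeds host-to-GPU copies.
loader = DataLoader(dataset, batch_size=256, num_workers=4, pin_memory=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
for images, labels in loader:
    images = images.to(device, non_blocking=True)  # overlap copy with compute
    # ... forward/backward pass would go here ...
    break
```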


Fujitsu adds deep learning to NVIDIA GPUs

@machinelearnbot

Fujitsu is rising to this challenge by introducing native deep learning processing capabilities to select Fujitsu Primergy CX and RX server models. To achieve the highest possible levels of system performance, Fujitsu is introducing native support for NVIDIA GPUs via direct connection to the mainboard. Connected either via plug-in PCIe cards or the NVIDIA NVLink high-speed interconnect, Fujitsu Primergy servers provide access to more than 100 teraflops (TFLOPS) of deep learning performance. The first models to offer native support for NVIDIA Volta GPUs are the Fujitsu Server Primergy CX2570 M4, one component of the modular CX400 M4 scale-out ecosystem, and the Fujitsu Server Primergy RX2540 M4.
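
To make the interconnect point concrete, here is a minimal sketch (using PyTorch's CUDA utilities, an illustrative choice rather than Fujitsu tooling) that enumerates the GPUs in such a server and checks peer-to-peer access, the capability NVLink provides between directly connected devices:

```python
# Minimal sketch: enumerate GPUs and check peer-to-peer access between
# device pairs. Peer access is what NVLink (or a shared PCIe root) enables.
import torch

n = torch.cuda.device_count()
for i in range(n):
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 2**30:.1f} GiB")

# True where direct GPU-to-GPU transfers are possible.
for i in range(n):
    for j in range(n):
        if i != j:
            print(i, "->", j, torch.cuda.can_device_access_peer(i, j))
```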


AWS AI Blog

#artificialintelligence

Second, framework developers need to maintain multiple backends to guarantee performance on hardware ranging from smartphone chips to data center GPUs. Diverse AI frameworks and hardware bring huge benefits to users, but it is very challenging for AI developers to deliver consistent results to end users. Motivated by compiler technology, a group of researchers including Tianqi Chen, Thierry Moreau, Haichen Shen, Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy from the Paul G. Allen School of Computer Science & Engineering, University of Washington, together with Ziheng Jiang from the AWS AI team, introduced the TVM stack to simplify this problem. Today, AWS is excited to announce, together with the research team from UW, an end-to-end compiler based on the TVM stack that compiles workloads directly from various deep learning frontends into optimized machine code.
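
The flow the announcement describes, importing a model from a deep learning frontend and compiling it down to optimized machine code, looks roughly like this. The sketch below uses TVM's later Relay API (the original announcement was built on the NNVM stack), and the ONNX file and input shape are placeholders:

```python
# Minimal sketch of the TVM flow: frontend model in, compiled module out.
import onnx
import tvm
from tvm import relay
from tvm.contrib import graph_executor

model = onnx.load("model.onnx")                # hypothetical model file
shape_dict = {"input": (1, 3, 224, 224)}       # assumed input name and shape
mod, params = relay.frontend.from_onnx(model, shape_dict)

target = "llvm"                                # CPU backend; "cuda" for GPU
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# The compiled artifact runs through TVM's graph executor.
dev = tvm.device(target, 0)
runtime = graph_executor.GraphModule(lib["default"](dev))
```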


Oracle announces autonomous database cloud to boost performance and increase machine learning

#artificialintelligence

Oracle Autonomous Data Warehouse Cloud is a next-generation cloud service built on the self-driving Oracle Autonomous Database technology, which uses machine learning to deliver enhanced performance, reliability and ease of deployment for data warehouses. The Oracle Autonomous Database Cloud eliminates the human labor associated with tuning, patching, updating and maintaining the database, and includes self-driving capabilities that provide continuous adaptive performance tuning based on machine learning. Unlike traditional cloud services with complex, manual configurations that require a database expert to specify data distribution keys and sort keys, build indexes, reorganize data or adjust compression, Oracle Autonomous Data Warehouse Cloud is a simple "load and go" service. Unlike traditional cloud services, which use generic compute shapes for database cloud services, Oracle Autonomous Data Warehouse Cloud is built on the high-performance Oracle Exadata platform.
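
A minimal sketch of the "load and go" idea: the user creates a table and loads rows with no distribution keys, sort keys, or index DDL, leaving physical tuning to the service. The client library (python-oracledb) and connection details are illustrative assumptions, not from the announcement:

```python
# Minimal sketch: plain DDL and inserts, no tuning directives.
import oracledb

# Hypothetical credentials and service name.
conn = oracledb.connect(user="admin", password="...", dsn="adw_high")
cur = conn.cursor()

cur.execute(
    "CREATE TABLE sales (sale_id NUMBER, region VARCHAR2(40), amount NUMBER)"
)
cur.executemany(
    "INSERT INTO sales VALUES (:1, :2, :3)",
    [(1, "EMEA", 1200.0), (2, "APAC", 840.5)],
)
conn.commit()
# No CREATE INDEX, partitioning, or compression settings: the service is
# expected to handle those automatically.
```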


Intel Invests $1 Billion in the AI Ecosystem to Fuel Adoption and Product Innovation - Intel Newsroom

#artificialintelligence

At Intel, we have an optimistic and pragmatic view of artificial intelligence's (AI) impact on society, jobs and daily life, one that will mimic other profound transformations, from the industrial revolution to the PC revolution. To drive AI innovation, Intel is making strategic investments spanning technology, R&D and partnerships with business, government, academia and community groups. We have also invested in startups like Mighty AI, DataRobot and Lumiata through our Intel Capital portfolio, and have invested more than $1 billion in companies that are helping to advance artificial intelligence. To support the sheer breadth of future AI workloads, businesses will need unmatched flexibility and infrastructure optimization so that both highly specialized and general-purpose AI functions can run alongside other critical business workloads.


We are making on-device AI ubiquitous

#artificialintelligence

You may have heard this vision, or may think that AI is really about big data and the cloud, and yet Qualcomm's solutions already have the power, thermal, and processing efficiency to run powerful AI algorithms on the device itself, which brings several advantages. We've also had our own success at the ImageNet Challenge using deep learning techniques, placing as a top-3 performer in challenges for object localization, object detection, and scene classification. We have also expanded our own research, and collaborated with the external AI community, on other promising areas and applications of machine learning, such as recurrent neural networks, object tracking, natural language processing, and handwriting recognition. As an example, at this year's F8 conference, Facebook and Qualcomm Technologies announced a collaboration to support the optimization of Caffe2, Facebook's open source deep learning framework, and the Snapdragon Neural Processing Engine (NPE) framework.


Introducing Social Hash Partitioner, a scalable distributed hypergraph partitioner

#artificialintelligence

Because a single host has limited storage and compute resources, our storage systems shard data items across multiple hosts, and our batch jobs execute over clusters of thousands of workers to scale and speed up computation. Our VLDB'17 paper, Social Hash Partitioner: A Scalable Distributed Hypergraph Partitioner, describes a new method for partitioning bipartite graphs while minimizing fan-out. We describe the resulting framework as a Social Hash Partitioner (SHP) because it can be used as the hypergraph partitioning component of the Social Hash framework introduced in our earlier NSDI'16 paper. The fan-out reduction model is applicable to many infrastructure optimization problems at Facebook, such as data sharding, query routing and index compression.
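
To make the objective concrete, here is a minimal sketch of the fan-out measure the paper minimizes: for a bipartite graph of queries to data items, a query's fan-out is the number of distinct buckets holding at least one of its items. The names and data are illustrative, not the paper's implementation:

```python
# Minimal sketch of the fan-out objective: a partitioner wants an
# assignment of items to buckets that keeps this average low.
def average_fanout(queries, assignment):
    """queries: dict query -> set of item ids; assignment: dict item -> bucket."""
    total = 0
    for items in queries.values():
        # Distinct buckets this query must touch.
        total += len({assignment[item] for item in items})
    return total / len(queries)

queries = {"q1": {"a", "b"}, "q2": {"b", "c", "d"}}
assignment = {"a": 0, "b": 0, "c": 1, "d": 1}
print(average_fanout(queries, assignment))  # (1 + 2) / 2 = 1.5
```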


Artificial Intelligence tunes Azure SQL Databases

#artificialintelligence

Over the last few years, SnelStart has worked closely with the SQL Server product team to leverage the Azure SQL Database platform to improve performance and reduce DevOps costs. Automatic tuning focuses on each database individually, monitoring its workload pattern and applying tuning recommendations tailored to that workload. Since enabling automatic tuning, the SQL Database service has executed 3,345 tuning actions on 1,410 unique databases, improving 1,730 unique queries across these databases. Microsoft is enabling automatic tuning on all internal workloads, including Microsoft IT, to reduce DevOps costs and improve performance across applications that rely on Azure SQL Database.
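
In Azure SQL Database, automatic tuning options are switched on per database with T-SQL. The sketch below sends that statement through pyodbc (an illustrative client choice; the connection string is a placeholder):

```python
# Minimal sketch: enable Azure SQL Database automatic tuning options.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;"   # placeholder server
    "DATABASE=mydb;UID=...;PWD=...",          # placeholder credentials
    autocommit=True,
)
cur = conn.cursor()

# Turn on plan correction and automatic index management for this database.
cur.execute("""
    ALTER DATABASE CURRENT SET AUTOMATIC_TUNING
        (FORCE_LAST_GOOD_PLAN = ON, CREATE_INDEX = ON, DROP_INDEX = ON)
""")

# Inspect the resulting tuning configuration.
for row in cur.execute(
    "SELECT name, desired_state_desc, actual_state_desc "
    "FROM sys.database_automatic_tuning_options"
):
    print(row)
```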


Reports Say Fujitsu, Huawei Developing Artificial Intelligence Chips

#artificialintelligence

System makers Fujitsu and Huawei Technologies reportedly are both planning to develop processors optimized for artificial intelligence workloads, moves that will put them into competition with the likes of Intel, Google, Nvidia and Advanced Micro Devices. Tech vendors are pushing hard to bring artificial intelligence (AI) and deep learning capabilities into their portfolios to meet the growing demand generated by a broad range of workloads, from data analytics to self-driving vehicles. Fujitsu engineers have been working for the past couple of years on what the company calls a deep learning unit (DLU), and last month the company gave more details on the component at the International Supercomputing conference. The chip reportedly will include 16 deep learning processing elements, each housing eight single-instruction, multiple-data (SIMD) execution units.
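
As a back-of-the-envelope reading of that layout: 16 processing elements with eight SIMD units each gives 128 parallel execution units. The sketch below works the arithmetic through; the clock rate and per-unit throughput are hypothetical placeholders, since the reports do not give them:

```python
# Parallelism implied by the reported DLU layout. Only the 16 x 8 structure
# comes from the article; the figures below it are hypothetical.
processing_elements = 16      # deep learning processing elements (reported)
simd_units_per_pe = 8         # SIMD execution units per element (reported)
lanes = processing_elements * simd_units_per_pe
print(lanes)                  # 128 parallel execution units

clock_ghz = 1.0               # hypothetical clock rate
ops_per_unit_per_cycle = 2    # hypothetical, e.g. a fused multiply-add
peak_gops = lanes * ops_per_unit_per_cycle * clock_ghz
print(f"~{peak_gops:.0f} Gop/s under these assumed figures")
```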