Intel unveils broad Xeon stack with dozens of workload-optimized processors

ZDNet

The Intel Xeon family (from left): Intel Xeon Platinum 9200 processor, 2nd-Gen Intel Xeon Scalable processor and Intel Xeon D-1600 processor.

Intel on Tuesday, April 2, 2019, introduced a portfolio of data-centric products designed to help customers extract more value from their data, headlined by its broadest lineup of Xeon processors to date: more than 50 workload-optimized processors. The new Xeon chips, along with new memory and storage products, are part of Intel's strategy to transform from a "PC-centric" company into a "data-centric" one. The products announced Tuesday amount to an "unmatched portfolio to move, store and process data," Navin Shenoy, Intel executive vice president and general manager of the Data Center Group, said at the launch event.


Intel's present and future AI chip business

#artificialintelligence

The future of Intel is AI, and its books imply as much. The Santa Clara company's AI chip segments notched $1 billion in revenue last year, and Intel expects the market opportunity to grow 30% annually, from $2.5 billion in 2017 to $10 billion by 2022. For perspective, data-centric revenues now constitute around half of Intel's total business across all divisions, up from around a third five years ago. Still, increased competition from incumbents Nvidia, Qualcomm, Marvell, and AMD; from startups such as Hailo Technologies, Graphcore, Wave Computing, Esperanto, and Quadric; and even from Amazon threatens to slow Intel's gains, which is why the company isn't resting on its laurels.
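As a quick sanity check on those projections (a sketch only; the $2.5 billion base, 30% rate, and five-year span are taken from the figures above), compounding does land close to the cited $10 billion:

```python
# Does 30% annual growth take a $2.5B market (2017) to roughly $10B (2022)?
start = 2.5    # billions USD, 2017 market opportunity
rate = 0.30    # 30% annual growth
years = 5      # 2017 -> 2022

projected = start * (1 + rate) ** years
print(f"Projected 2022 market: ${projected:.2f}B")  # prints "Projected 2022 market: $9.28B"
```

At exactly 30% per year the 2022 figure works out to about $9.3 billion, so the round "$10 billion" implies a growth rate just above 30%.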


Intel adds new programming tools to speed adoption of FPGA custom chips for AI - SiliconANGLE

#artificialintelligence

Intel Corp. has spent considerable effort promoting its field-programmable gate arrays (FPGAs), which can be used to accelerate computing tasks such as artificial intelligence and machine learning. FPGAs are reconfigurable chips, widely used in data centers, that can be reprogrammed on the fly to optimize them for specific tasks. The catch is that although they can be incredibly useful, they are very difficult to program in the first place, traditionally requiring expertise in specialized hardware description languages such as Verilog or VHDL. To make its FPGAs more accessible and speed up adoption, Intel released a set of new software tools earlier this week designed to make them easier to program. The goal is to open FPGAs to mainstream developers and thereby increase their adoption in the data center for workloads such as high-performance computing, artificial intelligence, data and video analytics, and 5G network processing.


Intel Unveils FPGA to Accelerate Neural Networks

#artificialintelligence

Intel today unveiled new hardware and software targeting the artificial intelligence (AI) market, which has emerged as a focus of investment for the largest data center operators. The chipmaker introduced an FPGA accelerator that offers more horsepower for companies developing AI-powered services. The Intel Deep Learning Inference Accelerator (DLIA) combines traditional Intel CPUs with field-programmable gate arrays (FPGAs), chips that can be reprogrammed to perform specialized computing tasks, allowing users to tailor compute power to specific workloads or applications. The DLIA is the first hardware product to emerge from Intel's $16.7 billion acquisition of Altera last year.


Intel Gears Up For FPGA Push

#artificialintelligence

Chip giant Intel has been talking about CPU-FPGA compute complexes for so long that it is sometimes hard to remember that its hybrid Xeon-Arria compute unit, which puts a Xeon server chip and a midrange FPGA into a single Xeon processor socket, is not yet shipping as a volume product. But Intel is working to get it into the field and has given The Next Platform an update on the current plan. The hybrid CPU-FPGA devices are akin to AMD's Accelerated Processing Units, or APUs, in that they put general-purpose compute and an accelerator (in AMD's case a GPU, here an FPGA) into a single processor package, and they are expected to see widespread adoption, particularly among hyperscalers and cloud builders who want to offload certain kinds of work from the CPU to an accelerator. While Intel has GPUs of its own and puts them in a CPU package or on the CPU die for certain parts of the market, namely low-end workstations and low-end servers based on the Xeon E3 chip that are used to accelerate media processing and the like, Intel is not enthusiastic about offloading work from its Xeon processors to other devices. It created the "Knights" family of parallel X86 processors first as an offload engine and then as a full processor in its own right with the "Knights Landing" Xeon Phi 7200 series, which saw initial shipments in late 2015 and formally launched in the summer of 2016.