Collaborating Authors


Global Deep Learning Market To Show Startling Growth During Forecast Period 2020–2026 – Zion Market Research


The global Deep Learning market is expected to rise at an impressive CAGR and generate the highest revenue by 2026, according to the latest report published by Zion Market Research. The report is titled "Global Deep Learning Market 2020 With Top Countries Data, Revenue, Key Developments, SWOT Study, COVID-19 impact Analysis, Growth and Outlook To 2026". It offers exclusive insight into details such as revenues, market share, strategies, growth rate, and products and their pricing by region/country for all major companies. The report provides a 360-degree overview of the market, listing the various factors restricting, propelling, and obstructing the market during the forecast period. It also provides additional information such as key industry developments, detailed segmentation of the market, a list of prominent players operating in the market, and other Deep Learning market trends.

On-device AI: Mobile artificial intelligence | Samsung Exynos


Equipped with best-in-class AI solutions, Samsung Exynos processors enable users to enjoy next-generation mobile experiences. Launched in 2018, the Exynos 9810 was the first processor in the series with deep learning software. With the integration of a neural processing unit, the Exynos series delivers unmatched performance for mobile AI operations. The newly introduced Exynos 990 processor, featuring a dual-core neural processing unit (NPU) and an improved digital signal processor (DSP), makes on-device AI practical through significantly faster AI processing. By developing algorithms that are four times lighter and eight times faster than existing solutions, Samsung Exynos will continuously set new standards for AI processing to push the boundaries of the next-generation mobile experience.

RISC-V business: SiFive and CEVA join forces to enable the development of AI-amenable, edge-oriented processors


On Tuesday, RISC-V CPU fixer SiFive announced it's working with CEVA, which licenses technology for deep learning, audio, and computer vision, to simplify the creation of processors capable of handling machine learning code without demanding too much power. RISC-V is an open, royalty-free instruction set architecture, unlike Intel's x86 chip architecture, which requires a license to implement recent processor designs. SiFive provides clients with access to the necessary intellectual property licenses to create custom silicon with minimal negotiation and hassle. CEVA does much the same in a more specific set of domains. SiFive and CEVA anticipate that making it easy to design low-power SoCs tuned for AI-oriented tasks will attract hardware vendors looking to sell RISC-V-based hardware for applications demanding on-device neural networks, such as imaging, computer vision, speech recognition, and sensor data handling.

Why Intel Acquired Habana


Intel Corporation this week announced that it has acquired Habana Labs for approximately $2 billion. Habana is an Israel-based company that develops programmable deep learning accelerators for the data centre. The acquisition is aimed at strengthening Intel's artificial intelligence portfolio and accelerating its efforts in the AI silicon market, which Intel expects to be greater than $25 billion by 2024. "This acquisition advances our AI strategy, which is to provide customers with solutions to fit every performance need – from the intelligent edge to the data centre," said Navin Shenoy, executive VP at Intel, in a press release. In July, Habana announced its Gaudi AI training processor, which the Tel Aviv startup promised was capable of beating GPU-based systems by 4x.

AI Hardware Built from a Software-first Perspective: Groq's Flexible Silicon Architecture


Semiconductor industry startups are usually founded by hardware engineers who develop a silicon architecture and then figure out how to map software for that specific hardware. Here is a tale of a chip startup founded in the age of artificial intelligence (AI) that has a software DNA. Groq was founded in 2016 by a group of software engineers who wanted to solve AI problems from the software side. When they approached the issue without any preconceptions of what an AI architecture may need to look like, they were able to create an architecture that can be mapped to different AI models. The company is focused on the inference market for data centers and autonomous vehicles, and its first product is a PCIe plug-in card for which Groq designed the ASIC and AI accelerator and developed the software stack.

Learning a faceted customer segmentation for discovering new business opportunities at Intel

For sales and marketing organizations within large enterprises, identifying and understanding new markets, customers and partners is a key challenge. Intel's Sales and Marketing Group (SMG) faces similar challenges while growing in new markets and domains and evolving its existing business. In today's complex technological and commercial landscape, there is a need for intelligent automation supporting a fine-grained understanding of businesses in order to help SMG sift through millions of companies across many geographies and languages and identify relevant directions. We present a system developed in our company that mines millions of public business web pages and extracts a faceted customer representation. We focus on two key customer aspects that are essential for finding relevant opportunities: industry segments (ranging from broad verticals such as healthcare, to more specific fields such as 'video analytics') and functional roles (e.g., 'manufacturer' or 'retail'). To address the challenge of labeled data collection, we enrich our data with external information gleaned from Wikipedia, and develop a semi-supervised, multi-label, multi-lingual deep learning model that parses customer website texts and classifies them into their respective facets. Our system scans and indexes companies as part of a large-scale knowledge graph that currently holds tens of millions of connected entities, with thousands being fetched, enriched and connected to the graph by the hour in real time, and also supports knowledge and insight discovery. In experiments conducted in our company, we are able to significantly boost the performance of sales personnel in the task of discovering new customers and commercial partnership opportunities.
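As an illustration of the multi-label setup described above (not Intel's actual model), the sketch below scores each facet with an independent sigmoid, so a single company page can be assigned several industry segments and functional roles at once. All facet names, weights, and features here are hypothetical toy values:

```python
import numpy as np

# Hypothetical facet vocabulary mixing industry segments and functional roles.
FACETS = ["healthcare", "video analytics", "manufacturer", "retail"]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def classify_facets(features, weights, bias, threshold=0.5):
    """Multi-label classification: every facet whose independent sigmoid
    score clears the threshold is assigned (unlike single-label softmax,
    where exactly one class would win)."""
    scores = sigmoid(features @ weights + bias)
    return [f for f, s in zip(FACETS, scores) if s >= threshold]

# Toy 3-dimensional feature vector and hand-picked weights for illustration.
weights = np.array([[ 2.0, -2.0, 0.5, -0.5],
                    [ 0.0,  0.0, 0.0,  0.0],
                    [ 0.0,  0.0, 0.0,  0.0]])
features = np.array([1.0, 0.0, 0.0])
labels = classify_facets(features, weights, bias=np.zeros(len(FACETS)))
print(labels)  # → ['healthcare', 'manufacturer'] — two facets fire at once
```

The sigmoid-per-label output is what makes the problem multi-label: the paper's actual model would learn such weights from the semi-supervised, multi-lingual training data rather than use hand-set values.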

Top 25 AI chip companies: A macro step change inferred from the micro scale


One of the effects of the ongoing trade war between the US and China is likely to be the accelerated development of what are being called "artificial intelligence chips", or AI chips for short, also sometimes referred to as AI accelerators. AI chips could play a critical role in economic growth going forward because they will inevitably feature in cars, which are becoming increasingly autonomous; smart homes, where electronic devices are becoming more intelligent; robotics, obviously; and many other technologies.

AI chips, as the term suggests, refer to a new generation of microprocessors which are specifically designed to process artificial intelligence tasks faster, using less power. Obvious, you might think, but some might wonder what the difference is between an AI chip and a regular chip, when all chips of any type process zeros and ones – a typical processor, after all, is actually capable of AI tasks. Graphics processing units are particularly good at AI-like tasks, which is why they form the basis for many of the AI chips being developed and offered today. Without getting out of our depth: while a general microprocessor is an all-purpose system, AI processors are embedded with logic gates and highly parallel calculation systems that are more suited to typical AI tasks such as image processing, machine vision, machine learning, deep learning, artificial neural networks, and so on. Maybe one could use cars as a metaphor: a general microprocessor is your typical family car, with decent speed and steering for any journey, while an AI processor is more like a racing car, stripped down and tuned to do one thing extremely fast.
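To make the "highly parallel calculation" point concrete, the toy sketch below shows the operation at the heart of most of the AI tasks listed above: a fully connected neural-network layer, which reduces to a matrix multiply, i.e. many independent multiply-accumulate operations that specialized silicon can execute side by side. This is illustrative NumPy only, not any particular chip's programming model:

```python
import numpy as np

def dense_layer(x, w, b):
    """One fully connected layer: y = relu(x @ w + b).
    Each output element is an independent dot product, which is why
    this maps so well onto hardware with many parallel MAC units."""
    return np.maximum(x @ w + b, 0.0)

x = np.ones((1, 4))        # toy input activations
w = np.full((4, 3), 0.5)   # toy weight matrix
b = np.zeros(3)
y = dense_layer(x, w, b)
print(y)  # each of the 3 outputs is 4 * (1.0 * 0.5) = 2.0
```

A CPU dispatches these multiply-accumulates largely serially; a GPU or dedicated AI accelerator runs thousands of them per cycle, which is the whole performance-per-watt argument for AI chips.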

Intel flexes AI processing muscle


Cloud and datacenter architects searching for new ways to pack more artificial intelligence horsepower into already constrained spaces will want to take a close look at Intel's new Nervana Neural Network Processors. Depending on the application, the processors may offer four times the performance or one-fifth the power draw of commercially available alternatives. The new processors are Intel's first ASIC offerings tailored specifically for deep learning workloads. The company announced last week that the processors are shipping now. In addition to the NNP-T1000 for training and the NNP-I1000 for inference, Intel also announced the coming generation of the Movidius Myriad Vision Processing Unit, which is designed for AI vision and inference processing at the edge.



At Hot Chips 2019, Intel revealed new details of upcoming high-performance artificial intelligence (AI) accelerators: Intel Nervana neural network processors, with the NNP-T for training and the NNP-I for inference. Intel engineers also presented technical details on hybrid chip packaging technology, Intel Optane DC persistent memory and chiplet technology for optical I/O.

Myriad X is the first VPU to feature the Neural Compute Engine, a dedicated hardware accelerator for running on-device deep neural network applications. Interfacing directly with other key components via the intelligent memory fabric, the Neural Compute Engine is able to deliver industry-leading performance per watt without the common dataflow bottlenecks encountered by other architectures.

Qualcomm Technologies, Inc., a subsidiary of Qualcomm Incorporated (NASDAQ: QCOM), announced that it is bringing the company's artificial intelligence (AI) expertise to the cloud with the Qualcomm Cloud AI 100. Built from the ground up to meet the explosive demand for AI inference processing in the cloud, the Qualcomm Cloud AI 100 draws on the company's heritage in advanced signal processing and power efficiency. Qualcomm's 4th-generation on-device AI engine is pitched as the ultimate personal assistant for camera, voice, XR and gaming, delivering smarter, faster and more secure experiences; utilizing all cores, it packs three times the power of its predecessor for on-device AI capabilities.

With the open-source release of NVDLA's optimizing compiler on GitHub, system architects and software teams now have a starting point with the complete source for the world's first fully open software and hardware inference platform. Turing, the next generation of NVIDIA's GPU designs, incorporates a number of new features and is rolling out this year. Nvidia launched its second-generation DGX system in March. In order to build the 2-petaflops half-precision DGX-2, Nvidia had to first design and build a new NVLink 2.0 switch chip, named NVSwitch.
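The 2-petaflops figure is easy to sanity-check, assuming the commonly quoted DGX-2 configuration of 16 Tesla V100 GPUs, each rated at roughly 125 TFLOPS of half-precision tensor-core throughput:

```python
# Back-of-the-envelope check of the DGX-2's "2 petaflops half-precision"
# headline number (assumed configuration: 16x Tesla V100).
gpus = 16
tflops_per_gpu = 125            # approx. FP16 tensor-core TFLOPS per V100
total_pflops = gpus * tflops_per_gpu / 1000
print(total_pflops)  # → 2.0 petaflops
```

Connecting 16 GPUs at full NVLink bandwidth is what required the new NVSwitch chip: a simple point-to-point NVLink topology does not scale to that many peers.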

A List of Chip/IP for Deep Learning


Machine learning, and especially deep learning, is driving the evolution of artificial intelligence (AI). At the beginning, deep learning was primarily a software play. Starting in 2016, the need for more efficient hardware acceleration of AI/ML/DL was recognized in academia and industry. This year, we saw more and more players jump into the race, including the world's top semiconductor companies, a number of startups, and even tech giants such as Google. I believe it could be very interesting to look at them together, so I built this list of AI/ML/DL ICs and IPs on GitHub and keep it updated. If you have any suggestions or new information, please let me know. The companies and products in the list are organized into five categories, as shown in the following table. Intel purchased Nervana Systems, which was developing a GPU/software approach in addition to its Nervana Engine ASIC. Intel is also planning to integrate Nervana technology into the Phi platform via the Knights Crest project.