processing unit


Powerful Photon-Based Processing Units Enable Complex Artificial Intelligence

#artificialintelligence

The photonic tensor core performs vector-matrix multiplications by exploiting the efficient interaction of light at different wavelengths with multistate photonic phase-change memories, using photons to create more powerful and power-efficient processing units for more complex machine learning. Machine learning performed by neural networks is a popular approach to developing artificial intelligence, as researchers aim to replicate brain functionalities for a variety of applications. A paper in the journal Applied Physics Reviews, by AIP Publishing, proposes a new approach to performing the computations required by a neural network, using light instead of electricity. In this approach, a photonic tensor core performs matrix multiplications in parallel, improving the speed and efficiency of current deep learning paradigms.
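The hardware is photonic, but the operation it accelerates is an ordinary vector-matrix product. A minimal NumPy sketch of what one such core computes (the weights and inputs below are illustrative values, not the authors' code):

```python
import numpy as np

# One "tensor core" step: multiply an input vector by a weight matrix.
# In the photonic design, each matrix entry is stored in a phase-change
# memory cell and each input element rides on its own wavelength, so all
# products are formed in parallel; here we only show the math it computes.
weights = np.array([[0.2, 0.8, -0.5],
                    [1.0, -0.3, 0.1]])   # 2x3 weight matrix (illustrative)
x = np.array([0.5, -1.2, 0.7])           # input vector (illustrative)

y = weights @ x                           # vector-matrix multiplication
print(y)                                  # -> [-1.21, 0.93]
```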


BriefCam Takes Innovation to the Edge with Analytics for AXIS Deep Learning Camera Series -- Security Today

#artificialintelligence

BriefCam today announced the upcoming availability of BriefCam Video Content Analytics on Axis cameras with built-in deep learning processing units. BriefCam's edge analytics initiative complements its portfolio of on-premises and cloud solutions by enabling greater freedom of choice for flexible deployment architectures through edge-based computing. Through the Axis Application Development Partner Program, BriefCam is one of the first to leverage the AXIS Camera Application Platform (ACAP) to enable comprehensive analytics directly on Axis Communications' upgraded camera series. The first camera to support BriefCam video content analytics is the AXIS Q1615 Mk III, which features a dual chipset, an ARTPEC-7 and a deep learning processing unit (DLPU), for video processing and metadata generation at the edge. With BriefCam analytics at the edge, along with post-processing and management capabilities, users gain real-time processing with reduced cost and complexity, as well as lower storage and bandwidth requirements. "Axis is proud to forge a deeper technology partnership with BriefCam toward our shared vision for advancing best-in-class video surveillance technologies," said Mats Thulin, director of core technology, Axis Communications AB. "Comprehensive video analytics is a key component to further optimizing surveillance camera investments and enabling new and expanded use cases for video – by deploying analytics at the edge, users have greater flexibility in how they implement and use video analytics."


Machines can learn unsupervised 'at speed of light' after AI breakthrough, scientists say

#artificialintelligence

Researchers have achieved a breakthrough in the development of artificial intelligence by using light instead of electricity to perform computations. The new approach significantly improves both the speed and efficiency of machine learning neural networks – a form of AI that aims to replicate the functions performed by a human brain in order to teach itself a task without supervision. Current processors used for machine learning are limited in performing complex operations by the power required to process the data. Such networks are also limited by the slow transmission of electronic data between the processor and the memory. Researchers from George Washington University in the US discovered that using photons within neural network (tensor) processing units (TPUs) could overcome these limitations and create more powerful and power-efficient AI.


Classifying The Modern Edge Computing Platforms

#artificialintelligence

A decade ago, edge computing meant delivering static content through a distributed content delivery network (CDN). Akamai, Limelight Networks, Cloudflare, and Fastly are examples of CDN services. They provide high availability and performance by distributing and caching content closer to the end user's location. The definition of the edge has changed significantly over the last five years. Today, the edge represents more than a CDN or a compute layer.



#artificialintelligence

Can artificial intelligence be deployed to slow down global warming, or is AI one of the greatest climate sinners ever? That is the interesting debate that finds (not surprisingly) representatives from the AI industry and academia on opposite sides of the issue. While PwC and Microsoft published a report concluding that using AI could reduce worldwide greenhouse gas emissions by 4% in 2030, researchers from the University of Massachusetts Amherst have calculated that training a single AI model can emit more than 626,000 pounds of carbon dioxide equivalent--nearly five times the lifetime emissions of the average American car. The big players have clearly understood that public sensitivity to climate change offers a wonderful marketing opportunity. IBM has launched its Green Horizons project to analyze environmental data and predict pollution.


EETimes - Chip Startups for AI in Edge and Endpoint Applications

#artificialintelligence

As the industry grapples with the best way to accelerate AI performance to keep up with requirements from cutting-edge neural networks, many startup companies are springing up around the world with new ideas about how best to achieve this. The sector is attracting a lot of venture capital funding, and the result is a field rich not just in cash but in novel ideas for computing architectures. Here at EETimes we are currently tracking around 60 AI chip startups in the US, Europe, and Asia, from companies reinventing programmable logic and multi-core designs, to those developing entirely new architectures, to those using futuristic technologies such as neuromorphic (brain-inspired) architectures and optical computing. Here is a snapshot of ten we think show promise, or at the very least have some interesting ideas. We've categorized them by where in the network their products are targeted: data centers, endpoints, or AIoT devices.


The Different Types Of Hardware AI Accelerators

#artificialintelligence

An AI accelerator is a specialised hardware accelerator or computer system designed to speed up artificial intelligence applications, particularly artificial neural networks, machine learning, robotics, and other data-intensive or sensor-driven tasks. They usually have novel designs and typically focus on low-precision arithmetic, novel dataflow architectures, or in-memory computing capability. As deep learning and artificial intelligence workloads grew in prominence over the last decade, specialised hardware units were designed or adapted from existing products to accelerate these tasks and to provide parallel, high-throughput systems for workstations targeted at various applications, including neural network simulations. As of 2018, a typical AI integrated circuit chip contains billions of MOSFET transistors. Hardware acceleration has many advantages, the main one being speed. Accelerators can greatly decrease the time it takes to train and execute an AI model, and can also be used to run specialised AI tasks that are impractical on a general-purpose CPU.
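To make the "low-precision arithmetic" point concrete, here is a minimal sketch of the int8 quantization scheme many accelerators use: weights and activations are scaled to 8-bit integers, the multiply-accumulate runs in integer arithmetic, and the result is rescaled to float. The scheme shown is a generic symmetric quantization, not any particular vendor's implementation:

```python
import numpy as np

def quantize_int8(x):
    # Symmetric per-tensor quantization: map the float range to [-127, 127].
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.random.randn(64, 128).astype(np.float32)   # weights (illustrative)
a = np.random.randn(128).astype(np.float32)       # activations (illustrative)

qw, sw = quantize_int8(w)
qa, sa = quantize_int8(a)

# Accumulate the products in wider integers (as accelerator MAC arrays do),
# then dequantize the result back to float.
acc = qw.astype(np.int32) @ qa.astype(np.int32)
y_approx = acc * (sw * sa)

y_exact = w @ a
print(np.max(np.abs(y_exact - y_approx)))          # small quantization error
```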


Neural Networks

#artificialintelligence

In its most general form, a neural network is a machine designed to model the way in which the brain performs a particular task or function of interest; the network is usually implemented with electronic components or simulated in software on a digital computer. To achieve good performance, neural networks employ a massive interconnection of simple computing cells referred to as "neurons," "perceptrons," or "processing units." A neural network is a massively parallel distributed processor made up of simple processing units that has a natural propensity for storing experiential knowledge and making it available for use. As in the human brain, receptors convert stimuli from the body or the external environment into electrical impulses that convey information to the neural net (the brain). The effectors convert electrical impulses generated by the neural net into discernible responses as system outputs.
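A minimal sketch of one such "processing unit" and of a small layer of them, assuming the common weighted-sum-plus-activation model (the weights, inputs, and tanh activation are illustrative choices):

```python
import numpy as np

def neuron(inputs, weights, bias):
    # A single processing unit: weighted sum of inputs plus a bias,
    # passed through a simple nonlinear activation.
    return np.tanh(np.dot(weights, inputs) + bias)

def layer(inputs, weight_matrix, biases):
    # A layer is just many such units evaluated in parallel.
    return np.tanh(weight_matrix @ inputs + biases)

x = np.array([0.5, -0.2, 0.9])        # incoming "stimuli" (illustrative)
W = np.random.randn(4, 3)             # 4 neurons, each with 3 input weights
b = np.zeros(4)
print(layer(x, W, b))                 # the layer's responses
```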


Quantum computing: A key ally for meeting business objectives

MIT Technology Review

In the business world, the opportunities for applying quantum technology relate to optimization: solving difficult business problems, reconfiguring complex processes, and understanding correlations between seemingly disparate data sets. The main purpose of quantum computing is to carry out computationally costly operations in a very short period of time, thereby accelerating business performance. Quantum computing can optimize business processes across many domains, for example maximizing cost/benefit ratios or optimizing financial assets, operations and logistics, and workforce management--usually delivering immediate financial gains. Many businesses are already using (or planning to use) classical optimization algorithms. And with four international case studies, Reply has proven that a quantum approach can give better results than existing optimization techniques. Speed and computational power are key components when working with data.
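The optimization problems described here are typically yes/no decision problems. As a purely illustrative sketch (hypothetical numbers, and a classical brute-force search rather than Reply's method or a quantum algorithm), here is the kind of cost/benefit selection problem such optimizers target:

```python
from itertools import product

# Pick a subset of projects that maximizes benefit while staying under budget.
# Quantum optimizers (e.g. annealers) target binary formulations like this;
# here we simply enumerate all 2^n candidate portfolios classically.
costs    = [4, 3, 5, 2]      # hypothetical project costs
benefits = [7, 4, 8, 3]      # hypothetical project benefits
budget   = 8

best_value, best_choice = 0, None
for choice in product([0, 1], repeat=len(costs)):
    cost  = sum(c * x for c, x in zip(costs, choice))
    value = sum(b * x for b, x in zip(benefits, choice))
    if cost <= budget and value > best_value:
        best_value, best_choice = value, choice

print(best_choice, best_value)   # -> (0, 1, 1, 0) with value 12
```

The brute-force loop scales exponentially with the number of decisions, which is exactly why faster approaches to this class of problem are attractive.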


5 Deep Learning Challenges To Watch Out For

#artificialintelligence

From your Google voice assistant to your 'Netflix and chill' recommendations to the very humble Grammarly -- they're all powered by deep learning. Deep learning has become one of the primary research areas in artificial intelligence. Most of the well-known applications of artificial intelligence, such as image processing, speech recognition and translation, and object identification, are carried out by deep learning. Thus, deep learning has the potential to solve most business problems, streamlining your work procedures and creating useful products for end customers. However, there are certain deep learning challenges that you should be aware of before going ahead with business decisions involving deep learning.