Three opportunities of Digital Transformation: AI, IoT and Blockchain

#artificialintelligence

Koomey's law: This law posits that the energy efficiency of computation doubles roughly every one and a half years (see Figure 1–7). In other words, the energy necessary for the same amount of computation halves in that time span. To visualize the exponential impact this has, consider the fact that a fully charged MacBook Air, running at the 1992 energy efficiency of computation, would completely drain its battery in a mere 1.5 seconds. According to Koomey's law, the energy required for computation in embedded devices is shrinking to the point that harvesting it from ambient sources such as solar power and thermal energy should suffice to power the computation needed in many applications.

Metcalfe's law: This law has nothing to do with chips, but everything to do with connectivity. Formulated by Robert Metcalfe as he invented Ethernet, the law essentially states that the value of a network grows with the square of the number of its nodes (see Figure 1–8).
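
To make the doubling arithmetic concrete, here is a minimal sketch of the calculation behind both laws; the ~12-hour present-day battery life, the 2015 reference year, and the proportionality constant for Metcalfe's law are illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope sketch of Koomey's and Metcalfe's laws.
# The 12-hour battery life, the 2015 reference year, and the constant k
# are illustrative assumptions, not figures from the article.

DOUBLING_PERIOD_YEARS = 1.5  # Koomey's law: energy efficiency doubles every ~1.5 years

def efficiency_gain(years: float) -> float:
    """Factor by which the energy needed per computation has shrunk over `years`."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

battery_life_today_s = 12 * 3600        # assume ~12 h of battery life today
gain = efficiency_gain(2015 - 1992)     # ~4.1e4 over 23 years
print(battery_life_today_s / gain)      # ~1 second at 1992 efficiency, consistent with the ~1.5 s claim

def metcalfe_value(n_nodes: int, k: float = 1.0) -> float:
    """Metcalfe's law: network value grows with the square of the node count."""
    return k * n_nodes ** 2

print(metcalfe_value(10), metcalfe_value(100))  # 100.0 vs 10000.0: 10x the nodes, 100x the value
```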


Oltramari

AAAI Conferences

In this position paper we discuss the benefits of combining knowledge technologies and deep learning (DL) for audio analytics: knowledge can enable high-level reasoning, helping to scale up intelligent systems from sound recognition to event analysis. We will also argue that a knowledge-integrated DL framework is key to enabling smart environments.


Brainchip ships first mini PCIexpress board with spiking neural network chip

#artificialintelligence

Brainchip has begun taking orders for the first commercially available Mini PCIe board using its Akida advanced neural networking processor. The $499 AKD1000-powered Mini PCIe boards can be plugged into a developer's existing system to unlock capabilities for a wide array of edge AI applications, including Smart City, Smart Health, Smart Home and Smart Transportation. BrainChip will also offer the full PCIe design layout files and the bill of materials (BOM) to system integrators and developers, enabling them to build their own boards and deploy AKD1000 chips in volume as stand-alone embedded accelerators or as co-processors. The boards can perform AI training and learning on the device itself, without depending on the cloud. The production-ready chips provide neuromorphic processing of sensor data at low cost, at high speed, and with very low power consumption, along with built-in security.


10 AI Technologies for Your Business in 2022

#artificialintelligence

When it comes to technological breakthroughs, the world has come a long way from the abacus to quantum computers. A hundred years ago, the globe was heavily reliant on manual labour. Even simple procedures like arithmetic operations took a long time and were tiresome. Various technologies capable of doing complicated calculations were introduced when this challenge was recognised. These technologies advanced quickly, and the world immediately recognised their promise. Calculations have become faster and more precise.


Artificial Intelligence of Things (AIoT) - Trends and Applications in 2022 - viso.ai

#artificialintelligence

The accelerating convergence of artificial intelligence (AI) and the Internet of Things (IoT) has sparked a recent wave of interest in the Artificial Intelligence of Things (AIoT). This article covers everything you need to know about the basics of the Artificial Intelligence of Things. We will discuss how this emerging technology drives the development of disruptive applications, software, sensors, and systems. AIoT stands for Artificial Intelligence of Things; it combines the connectivity of the Internet of Things (IoT) with the data-driven knowledge obtained from Artificial Intelligence (AI). This emerging technology is based on the integration of Artificial Intelligence into IoT infrastructure.


#CES2022 Twitter NodeXL SNA Map and Report for Monday, 03 January 2022 at 20:34 UTC

#artificialintelligence

The graph represents a network of 6,360 Twitter users whose recent tweets contained "#CES2022", or who were replied to or mentioned in those tweets, taken from a data set limited to a maximum of 18,000 tweets. The network was obtained from Twitter on Monday, 03 January 2022 at 21:39 UTC. The tweets in the network were tweeted over the 1-day, 4-hour, 48-minute period from Sunday, 02 January 2022 at 15:45 UTC to Monday, 03 January 2022 at 20:34 UTC. Additional tweets that were mentioned in this data set were also collected from prior time periods. These tweets may expand the complete time period of the data.
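
As a rough illustration of how such a reply/mention network can be assembled, here is a minimal sketch using networkx; the tweet records and field names below are hypothetical and do not reflect NodeXL's actual import format.

```python
# Minimal sketch: build a directed reply/mention graph like the one described.
# The tweet records and field names are hypothetical, not the NodeXL export format.
import networkx as nx

tweets = [
    {"user": "alice", "mentions": ["bob"], "reply_to": None},
    {"user": "bob",   "mentions": [],      "reply_to": "carol"},
]

G = nx.DiGraph()
for t in tweets:
    G.add_node(t["user"])
    for mentioned in t["mentions"]:
        G.add_edge(t["user"], mentioned, kind="mention")   # edge for each mention
    if t["reply_to"]:
        G.add_edge(t["user"], t["reply_to"], kind="reply")  # edge for each reply

print(G.number_of_nodes(), G.number_of_edges())
```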


A Comprehensive Survey on Radio Frequency (RF) Fingerprinting: Traditional Approaches, Deep Learning, and Open Challenges

arXiv.org Artificial Intelligence

Fifth generation (5G) networks and beyond envision a massive Internet of Things (IoT) rollout to support disruptive applications such as extended reality (XR), augmented/virtual reality (AR/VR), industrial automation, autonomous driving, and smart everything, which brings together massive and diverse IoT devices occupying the radio frequency (RF) spectrum. Along with spectrum crunch and throughput challenges, such a massive scale of wireless devices exposes unprecedented threat surfaces. RF fingerprinting is heralded as a candidate technology that can be combined with cryptographic and zero-trust security measures to ensure data privacy, confidentiality, and integrity in wireless networks. Motivated by the relevance of this subject to future communication networks, in this work we present a comprehensive survey of RF fingerprinting approaches, ranging from a traditional view to the most recent deep learning (DL) based algorithms. Existing surveys have mostly focused on a constrained presentation of wireless fingerprinting approaches; many aspects remain untold. In this work, we mitigate this by addressing every aspect - background on signal intelligence (SIGINT), applications, relevant DL algorithms, a systematic literature review of RF fingerprinting techniques spanning the past two decades, a discussion of datasets, and potential research avenues - necessary to elucidate this topic for the reader in an encyclopedic manner.



Tiny machine learning design alleviates a bottleneck in memory usage on internet-of-things devices

#artificialintelligence

Machine learning gives researchers powerful tools to identify and predict patterns and behaviors, as well as to learn, optimize, and perform tasks. Applications range from vision systems on autonomous vehicles and social robots to smart thermostats and to wearable and mobile devices such as smartwatches and apps that can monitor health changes. While these algorithms and their architectures are becoming more powerful and efficient, they typically require tremendous amounts of memory, computation, and data to train and make inferences. At the same time, researchers are working to reduce the size and complexity of the devices that these algorithms can run on, all the way down to the microcontroller unit (MCU) found in billions of internet-of-things (IoT) devices. An MCU is a memory-limited minicomputer housed in a compact integrated circuit; it lacks an operating system and runs simple commands.
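
For a sense of the memory bottleneck being described, here is a back-of-the-envelope sketch; the layer sizes and the 256 KB SRAM budget are illustrative assumptions for a typical Cortex-M class MCU, not figures from the article.

```python
# Rough estimate of whether a small int8 model fits an MCU's SRAM.
# Layer shapes and the 256 KB budget are illustrative assumptions, not measured values.
# (In practice weights often sit in flash; counting them against SRAM is a simplification.)

SRAM_BUDGET_BYTES = 256 * 1024  # assumed budget on a Cortex-M class MCU

# (name, int8 parameter count, peak int8 activation element count)
layers = [
    ("conv1", 3 * 3 * 3 * 16,  96 * 96 * 16),
    ("conv2", 3 * 3 * 16 * 32, 48 * 48 * 32),
    ("dense", 32 * 10,         10),
]

weight_bytes = sum(params for _, params, _ in layers)     # 1 byte per int8 weight
peak_activation_bytes = max(act for _, _, act in layers)  # one layer's activations live at a time

total = weight_bytes + peak_activation_bytes
print(f"weights={weight_bytes} B, peak activations={peak_activation_bytes} B, "
      f"fits in SRAM: {total <= SRAM_BUDGET_BYTES}")
```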


Combining Embeddings and Fuzzy Time Series for High-Dimensional Time Series Forecasting in Internet of Energy Applications

arXiv.org Artificial Intelligence

The prediction of residential power usage is essential in assisting a smart grid to manage and preserve energy and ensure efficient use. Accurate energy forecasting at the customer level translates directly into efficiency improvements across the power grid; however, forecasting building energy use is a complex task due to many influencing factors, such as meteorological and occupancy patterns. In addition, high-dimensional time series increasingly arise in the Internet of Energy (IoE), given the emergence of multi-sensor environments and the two-way communication between energy consumers and the smart grid. Therefore, methods that are capable of handling high-dimensional time series are of great value in smart building and IoE applications. Fuzzy Time Series (FTS) models stand out as data-driven non-parametric models that are easy to implement and highly accurate. Unfortunately, existing FTS models can become infeasible if all features are used to train the model. We present a new methodology for handling high-dimensional time series by projecting the original high-dimensional data into a low-dimensional embedding space and applying a multivariate FTS approach in this low-dimensional representation. Combining these techniques enables a better representation of the complex content of multivariate time series and more accurate forecasts.
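
As a rough sketch of the embed-then-forecast pipeline the abstract describes: PCA stands in for the embedding step and a simple least-squares one-step predictor stands in for the multivariate FTS model, which is not reproduced here; the data shapes are illustrative.

```python
# Sketch of the embed-then-forecast pipeline: project a high-dimensional series
# into a low-dimensional space and forecast there. PCA + a least-squares one-step
# predictor stand in for the paper's multivariate Fuzzy Time Series model;
# the synthetic data and dimensions are illustrative.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))            # 1000 time steps, 50 sensor channels

pca = PCA(n_components=3)
Z = pca.fit_transform(X)                   # low-dimensional embedding of the series

# Fit a one-step linear forecaster Z[t+1] ~= Z[t] @ A in the embedding space.
A, *_ = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)

z_next = Z[-1] @ A                                       # forecast the next embedded point
x_next = pca.inverse_transform(z_next.reshape(1, -1))[0]  # map back to the 50 original channels
print(x_next.shape)                                      # (50,)
```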