Tech designed to aid visually impaired could benefit from human-AI collaboration


Remote sighted assistance (RSA) technology--which connects visually impaired individuals with human agents through a live video call on their smartphones--helps people with low or no vision navigate tasks that require sight. But what happens when existing computer vision technology doesn't fully support an agent in fulfilling certain requests, such as reading instructions on a medicine bottle or recognizing flight information on an airport's digital screen? According to researchers at the Penn State College of Information Sciences and Technology, some of these challenges cannot be solved with existing computer vision techniques alone. Instead, the researchers posit that they would be better addressed by humans and AI working together to improve the technology and enhance the experience both for visually impaired users and for the agents who support them. In a recent study presented at the 27th International Conference on Intelligent User Interfaces (IUI) in March, the researchers highlighted five emerging problems with RSA that they say warrant new development in human-AI collaboration.

Study Demonstrates that Cardiologs' AI Dramatically Reduces Inconclusive Apple Watch ECGs


Data presented at EHRA 2022 show that Cardiologs' deep neural network model outperforms the Apple Watch ECG 2.0 algorithm at detecting and classifying irregular heart rhythms. "Wearable devices, such as the Apple Watch, are capable of recording a single-lead ECG to determine cardiac rhythm. However, a large proportion of the smartwatch readings come back as inconclusive. This problem was solved using Cardiologs' AI algorithm. The data showed Cardiologs' AI performed equally well but, remarkably, the results were almost never inconclusive," said Dr. Laurent Fiorina. The study included 101 patients at a typical tertiary care hospital who were assessed for atrial arrhythmia (AA).
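The notion of an "inconclusive" reading can be made concrete with a reject-option classifier: the device abstains whenever its confidence falls below a threshold. The sketch below is purely illustrative; the scores, threshold, and `classify` function are invented and bear no relation to Cardiologs' or Apple's actual algorithms.

```python
# Illustrative reject-option classification: a rhythm classifier reports
# "inconclusive" when its confidence falls below a threshold. All values
# here are made up for demonstration.

def classify(confidence_af, threshold=0.8):
    # confidence_af: assumed single score -- the model's probability
    # that the strip shows atrial fibrillation
    if max(confidence_af, 1 - confidence_af) < threshold:
        return "inconclusive"
    return "AF" if confidence_af >= 0.5 else "sinus"

def inconclusive_rate(confidences, threshold):
    labels = [classify(c, threshold) for c in confidences]
    return labels.count("inconclusive") / len(labels)

scores = [0.05, 0.3, 0.55, 0.7, 0.95, 0.98]
strict = inconclusive_rate(scores, threshold=0.9)   # conservative device
lenient = inconclusive_rate(scores, threshold=0.6)  # more decisive model
```

A lower threshold (or a model that is simply more confident on the same strips) directly translates into fewer inconclusive readings, which is the effect the study reports.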

Deep Learning for Human Activity Recognition


Human activity recognition is a large research field in which human activities are recognized using the concepts and methods of deep learning. In this article we discuss one piece of research on human activity recognition using wearable sensors, and review the approaches, challenges, and evaluation benchmarks involved. Abstract: Recognizing human activity is important for improving human-interaction applications in healthcare, personal fitness, and smart gadgets. Many papers have discussed various techniques for representing human activity, resulting in discernible progress. This article provides a comprehensive assessment of contemporary, high-performing approaches for recognizing human movement using wearable sensors.
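As a minimal, hypothetical illustration of the standard front end for sensor-based activity recognition, the sketch below segments a 1-D accelerometer stream into sliding windows and extracts simple statistical features per window. The window size, stride, and signal are arbitrary choices for demonstration, not values from the article.

```python
# Minimal HAR preprocessing sketch: sliding windows over a 1-D
# accelerometer stream, plus simple per-window statistics -- the usual
# front end before a deep model consumes the data.

def windows(signal, size=4, stride=2):
    # fixed-size, overlapping segments of the raw stream
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, stride)]

def features(window):
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    return {"mean": mean, "std": var ** 0.5, "range": max(window) - min(window)}

accel = [0.1, 0.2, 0.1, 0.9, 1.1, 0.8, 0.2, 0.1]  # toy acceleration values
feats = [features(w) for w in windows(accel)]
```

Each feature vector (or, in deep models, the raw window itself) then becomes one training example labeled with the activity performed during that window.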

Engineers build a lower-energy chip that can prevent hackers from extracting hidden information from a smart device


A heart attack patient, recently discharged from the hospital, is using a smartwatch to help monitor his electrocardiogram signals. The smartwatch may seem secure, but the neural network processing that health information is using private data that could still be stolen by a malicious agent through a side-channel attack. A side-channel attack seeks to gather secret information by indirectly exploiting a system or its hardware. In one type of side-channel attack, a savvy hacker could monitor fluctuations in the device's power consumption while the neural network is operating to extract protected information that "leaks" out of the device. "In the movies, when people want to open locked safes, they listen to the clicks of the lock as they turn it. That reveals that probably turning the lock in this direction will help them proceed further. That is what a side-channel attack is. It is just exploiting unintended information and using it to predict what is going on inside the device," says Saurav Maji, a graduate student in MIT's Department of Electrical Engineering and Computer Science (EECS) and lead author of a paper that tackles this issue.
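The lock-listening analogy can be made concrete with a toy correlation power analysis. In the sketch below, `SECRET_KEY` and the `simulated_trace` leakage model (power proportional to the Hamming weight of a key-mixed value) are invented for illustration; a real attack would work on noisy measurements from actual hardware.

```python
# Toy correlation power analysis (CPA): recover a secret byte from
# simulated power traces by correlating each key guess's predicted
# leakage against the observed "power" samples.

def hamming_weight(x):
    return bin(x).count("1")

SECRET_KEY = 0x5A  # hypothetical secret the attacker wants to recover

def simulated_trace(plaintext):
    # assumed leakage model: power draw tracks the Hamming weight of
    # the key-mixed intermediate value (noise-free for this demo)
    return hamming_weight(plaintext ^ SECRET_KEY)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def recover_key(plaintexts, traces):
    # the guess whose predicted leakage correlates best with the
    # observed power wins
    return max(
        range(256),
        key=lambda g: pearson([hamming_weight(p ^ g) for p in plaintexts], traces),
    )

plaintexts = list(range(256))
traces = [simulated_trace(p) for p in plaintexts]
recovered = recover_key(plaintexts, traces)
```

The attacker never reads the key directly: only the correlation between guessed intermediates and measured power reveals it, which is exactly the "unintended information" Maji describes.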

Uncovering Instabilities in Variational-Quantum Deep Q-Networks Artificial Intelligence

Deep Reinforcement Learning (RL) has advanced considerably over the past decade. At the same time, state-of-the-art RL algorithms require a large computational budget in terms of training time to converge. Recent work has started to approach this problem through the lens of quantum computing, which promises theoretical speed-ups for several traditionally hard tasks. In this work, we examine a class of hybrid quantum-classical RL algorithms that we collectively refer to as variational quantum deep Q-networks (VQ-DQN). We show that VQ-DQN approaches are subject to instabilities that cause the learned policy to diverge, study the extent to which this afflicts the reproducibility of established results based on classical simulation, and perform systematic experiments to identify potential explanations for the observed instabilities. Additionally, and in contrast to most existing work on quantum reinforcement learning, we execute RL algorithms on an actual quantum processing unit (an IBM Quantum Device) and investigate differences in behaviour between simulated and physical quantum systems that suffer from implementation deficiencies. Our experiments show that, contrary to claims in the literature, it cannot be conclusively decided whether known quantum approaches, even if simulated without physical imperfections, can provide an advantage as compared to classical approaches. Finally, we provide a robust, universal and well-tested implementation of VQ-DQN as a reproducible testbed for future experiments.

GEMEL: Model Merging for Memory-Efficient, Real-Time Video Analytics at the Edge Artificial Intelligence

Video analytics pipelines have steadily shifted to edge deployments to reduce bandwidth overheads and privacy violations, but in doing so, face an ever-growing resource tension. Most notably, edge-box GPUs lack the memory needed to concurrently house the growing number of (increasingly complex) models for real-time inference. Unfortunately, existing solutions that rely on time/space sharing of GPU resources are insufficient as the required swapping delays result in unacceptable frame drops and accuracy violations. We present model merging, a new memory management technique that exploits architectural similarities between edge vision models by judiciously sharing their layers (including weights) to reduce workload memory costs and swapping delays. Our system, GEMEL, efficiently integrates merging into existing pipelines by (1) leveraging several guiding observations about per-model memory usage and inter-layer dependencies to quickly identify fruitful and accuracy-preserving merging configurations, and (2) altering edge inference schedules to maximize merging benefits. Experiments across diverse workloads reveal that GEMEL reduces memory usage by up to 60.7%, and improves overall accuracy by 8-39% relative to time/space sharing alone.
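The core idea of merging can be sketched in a few lines: models whose leading layers are architecturally identical share one in-memory copy of those layers (weights included), so that copy is charged against GPU memory only once. The classes and the prefix-only matching rule below are simplifications for illustration, not GEMEL's actual algorithm.

```python
# Sketch of model merging: two models share layer objects where their
# architectures match, cutting total parameter memory. "params" stands
# in for the memory footprint of a layer's weights.

class Layer:
    def __init__(self, shape_sig, params):
        self.shape_sig = shape_sig  # architectural signature, e.g. "conv3x3-64"
        self.params = params        # number of weights in the layer

def memory_cost(models):
    # shared layer objects are counted once across all models
    counted, total = set(), 0
    for model in models:
        for layer in model:
            if id(layer) not in counted:
                counted.add(id(layer))
                total += layer.params
    return total

def merge_shared_prefix(model_a, model_b):
    # replace model_b's leading layers with model_a's wherever the
    # architecture matches (in practice followed by joint retraining
    # to preserve accuracy)
    merged = list(model_b)
    for i, (la, lb) in enumerate(zip(model_a, model_b)):
        if la.shape_sig != lb.shape_sig:
            break
        merged[i] = la
    return merged

a = [Layer("conv3x3-64", 1728), Layer("conv3x3-128", 73728), Layer("fc-10", 1280)]
b = [Layer("conv3x3-64", 1728), Layer("conv3x3-128", 73728), Layer("fc-100", 12800)]
before = memory_cost([a, b])
after = memory_cost([a, merge_shared_prefix(a, b)])
```

Because the duplicated convolutional prefix is stored once, the merged pair fits in roughly half the memory here, which is the mechanism behind GEMEL's reduced swapping.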

Forecasting: theory and practice Machine Learning

Forecasting has always been at the forefront of decision making and planning. The uncertainty that surrounds the future is both exciting and challenging, with individuals and organisations seeking to minimise risks and maximise utilities. The large number of forecasting applications calls for a diverse set of forecasting methods to tackle real-life challenges. This article provides a non-systematic review of the theory and the practice of forecasting. We provide an overview of a wide range of theoretical, state-of-the-art models, methods, principles, and approaches to prepare, produce, organise, and evaluate forecasts. We then demonstrate how such theoretical concepts are applied in a variety of real-life contexts. We do not claim that this review is an exhaustive list of methods and applications. However, we hope that our encyclopedic presentation will offer a point of reference for the rich work that has been undertaken over the last decades, with some key insights for the future of forecasting theory and practice. Given its encyclopedic nature, the intended mode of reading is non-linear. We offer cross-references to allow the readers to navigate through the various topics. We complement the theoretical concepts and applications covered by large lists of free or open-source software implementations and publicly available databases.
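As a worked example of one classical method that such reviews cover, simple exponential smoothing forecasts with a geometrically weighted average of past observations. The series and smoothing parameter below are made up for illustration.

```python
# Simple exponential smoothing: the smoothed level is updated as
#   level_t = alpha * y_t + (1 - alpha) * level_{t-1}
# and the final level serves as the flat h-step-ahead forecast.

def ses_forecast(series, alpha=0.5):
    level = series[0]  # initialise with the first observation
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

demand = [10, 12, 11, 13]          # toy demand history
forecast = ses_forecast(demand)    # forecast for all future periods
```

Larger `alpha` weights recent observations more heavily; `alpha = 1` reduces to the naive "last value" forecast, and small `alpha` approaches the long-run mean.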

Artificial intelligence simulates microprocessor performance in real time


This approach is detailed in a paper presented at MICRO-54, the 54th IEEE/ACM International Symposium on Microarchitecture and one of the top conferences in the field of computer architecture, where it was selected as the conference's best publication. "This is a problem that needs to be studied in-depth and has traditionally relied on additional circuits to solve it," said Zhiyao Xie, lead author of the paper and a doctoral candidate in the lab of Yiran Chen, a professor of electrical and computer engineering at Duke. "But our approach runs directly on microprocessors in the background, which opens up a lot of new opportunities. I think that's why people are excited about it." In modern computer processors, computation cycles occur some 3 trillion times per second. Tracking the energy consumed at such speeds is important for maintaining the performance and efficiency of the entire chip.
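The underlying idea of counter-based power prediction can be sketched as a toy calibration: fit a linear map from an activity counter to measured power, then estimate power from the counter alone at run time. All numbers below are invented, and the paper's actual predictor is a model learned on-chip, not this tiny regression.

```python
# Hedged sketch: calibrate power = intercept + slope * counter from a
# few (counter, watts) samples, then estimate power without a meter.

def fit_line(xs, ys):
    # one-feature ordinary least squares
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# hypothetical calibration run: counter readings vs. measured watts
counter = [1.0, 2.0, 3.0, 4.0]
watts = [2.5, 4.5, 6.5, 8.5]      # 2 W per unit of activity + 0.5 W idle
slope, intercept = fit_line(counter, watts)

def estimate_power(c):
    # run-time estimate from the counter alone
    return intercept + slope * c

est = estimate_power(5.0)
```

Once calibrated, the estimate costs only a multiply-add per interval, which is why a learned predictor can run "directly on microprocessors in the background" instead of requiring dedicated measurement circuits.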

Exploring the Impact of Virtualization on the Usability of the Deep Learning Applications Artificial Intelligence

Deep Learning-based (DL) applications are becoming increasingly popular and advancing at an unprecedented pace. While many research works are being undertaken to enhance Deep Neural Networks (DNN) -- the centerpiece of DL applications -- practical deployment challenges of these applications in the Cloud and Edge systems, and their impact on the usability of the applications, have not been sufficiently investigated. In particular, the impact of deploying different virtualization platforms, offered by the Cloud and Edge, on the usability of DL applications (in terms of the End-to-End (E2E) inference time) has remained an open question. Importantly, resource elasticity (by means of scale-up), CPU pinning, and processor type (CPU vs GPU) configurations have been shown to be influential on the virtualization overhead. Accordingly, the goal of this research is to study the impact of these potentially decisive deployment options on the E2E performance, and thus the usability, of DL applications. To that end, we measure the impact of four popular execution platforms (namely, bare-metal, virtual machine (VM), container, and container in VM) on the E2E inference time of four types of DL applications, upon changing processor configuration (scale-up, CPU pinning) and processor types. This study reveals a set of interesting and sometimes counter-intuitive findings that can be used as best practices by Cloud solution architects to efficiently deploy DL applications in various systems. The notable finding is that solution architects must be aware of the DL application characteristics, particularly their pre- and post-processing requirements, to be able to optimally choose and configure an execution platform, determine the use of GPU, and decide the efficient scale-up range.
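The finding that pre- and post-processing requirements can cap the benefit of a GPU follows from a simple Amdahl-style decomposition of E2E time; the stage timings and speedup factor below are hypothetical, not measurements from the paper.

```python
# Toy decomposition of end-to-end (E2E) inference time: only the model
# inference stage benefits from the accelerator, so pre/post-processing
# bounds the achievable speedup.

def e2e_time(pre_ms, infer_ms, post_ms, gpu_speedup=1.0):
    return pre_ms + infer_ms / gpu_speedup + post_ms

# a pipeline dominated by inference gains a lot from the GPU ...
heavy_cpu = e2e_time(5, 90, 5)
heavy_gpu = e2e_time(5, 90, 5, gpu_speedup=9.0)

# ... while a pre/post-heavy pipeline barely moves
light_cpu = e2e_time(45, 10, 45)
light_gpu = e2e_time(45, 10, 45, gpu_speedup=9.0)
```

Both pipelines start at 100 ms, yet the same 9x accelerator yields 20 ms in one case and roughly 91 ms in the other, which is why an architect must know the application's stage breakdown before deciding on a GPU.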

MAPLE: Microprocessor A Priori for Latency Estimation Artificial Intelligence

Modern deep neural networks must demonstrate state-of-the-art accuracy while exhibiting low latency and energy consumption. As such, neural architecture search (NAS) algorithms take these two constraints into account when generating a new architecture. However, efficiency metrics such as latency are typically hardware dependent, requiring the NAS algorithm to either measure or predict the architecture latency. Measuring the latency of every evaluated architecture adds a significant amount of time to the NAS process. Here we propose Microprocessor A Priori for Latency Estimation (MAPLE), which does not rely on transfer learning or domain adaptation but instead generalizes to new hardware by incorporating prior hardware characteristics during training. MAPLE takes advantage of a novel quantitative strategy to characterize the underlying microprocessor by measuring relevant hardware performance metrics, yielding a fine-grained and expressive hardware descriptor. Moreover, MAPLE exploits the tightly coupled I/O between the CPU and GPU and their interdependence: it predicts DNN latency on the GPU while measuring hardware performance counters on the CPU that feeds the GPU. Using this quantitative strategy as the hardware descriptor, MAPLE can generalize to new hardware via a few-shot adaptation strategy: with as few as 3 samples it exhibits a 3% improvement over state-of-the-art methods requiring as many as 10 samples. Experimental results showed that increasing the few-shot adaptation samples to 10 improves accuracy over the state-of-the-art methods by 12%. Furthermore, MAPLE exhibits 8-10% better accuracy, on average, than relevant baselines at any number of adaptation samples.
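The few-shot adaptation regime can be illustrated with a deliberately crude stand-in: rescale latency predictions from a reference device using a handful of measurements on the new hardware. The single scale factor and all numbers below are fabricated; MAPLE's real descriptor-based regressor is far richer than this.

```python
# Crude sketch of few-shot latency adaptation: a handful of
# (reference prediction, new-hardware measurement) pairs calibrate a
# single scale factor applied to all future predictions.

def adapt_scale(predicted, measured):
    # average ratio over the few calibration samples
    return sum(m / p for p, m in zip(predicted, measured)) / len(predicted)

def predict_on_new_hw(base_prediction_ms, scale):
    return base_prediction_ms * scale

# three calibration samples on the new device (the few-shot regime)
reference_preds = [10.0, 20.0, 40.0]   # ms predicted on the reference GPU
new_hw_meas = [15.0, 30.0, 60.0]       # ms measured on the new GPU

scale = adapt_scale(reference_preds, new_hw_meas)
est = predict_on_new_hw(8.0, scale)    # unseen architecture, new device
```

The point of the sketch is only the workflow: a small number of measured samples on the target device is enough to transfer a predictor trained elsewhere, which is what lets NAS avoid timing every candidate architecture on every new chip.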