Mobile: Overviews


A Survey on Edge Intelligence

arXiv.org Artificial Intelligence

Edge intelligence refers to a set of connected systems and devices for data collection, caching, processing, and analysis in locations close to where data is captured, based on artificial intelligence. The aim of edge intelligence is to enhance the quality and speed of data processing and to protect the privacy and security of the data. Although it emerged only recently, spanning the period from 2011 to the present, this field of research has shown explosive growth over the past five years. In this paper, we present a thorough and comprehensive survey of the literature surrounding edge intelligence. We first identify four fundamental components of edge intelligence, namely edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then aim for a systematic classification of the state of these solutions by examining research results and observations for each of the four components, and present a taxonomy that includes practical problems, adopted techniques, and application goals. For each category, we elaborate on, compare, and analyse the literature from the perspectives of adopted techniques, objectives, performance, advantages and drawbacks, etc. This survey article provides a comprehensive introduction to edge intelligence and its application areas. In addition, we summarise the development of this emerging research field and the current state of the art, and discuss important open issues and possible theoretical and technical solutions.
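
To make the offloading component concrete, here is a minimal sketch (not taken from the survey itself) of the kind of decision an edge-offloading scheme must make: run a task locally or ship it to an edge server, based on rough latency estimates. The task sizes, throughput figures and the latency-only criterion are illustrative assumptions.

```python
# Minimal, illustrative offloading decision: compare estimated local latency
# against transmit-plus-remote latency. All figures below are made up.

from dataclasses import dataclass

@dataclass
class Task:
    compute_ops: float      # required operations (FLOPs)
    input_bytes: float      # data to transmit if offloaded

def offload_decision(task: Task,
                     local_flops_per_s: float,
                     edge_flops_per_s: float,
                     uplink_bytes_per_s: float) -> str:
    """Return 'local' or 'edge' depending on which option has lower latency."""
    local_latency = task.compute_ops / local_flops_per_s
    edge_latency = (task.input_bytes / uplink_bytes_per_s
                    + task.compute_ops / edge_flops_per_s)
    return "local" if local_latency <= edge_latency else "edge"

# Example: a 2 GFLOP inference on 200 kB of input.
task = Task(compute_ops=2e9, input_bytes=2e5)
print(offload_decision(task,
                       local_flops_per_s=5e9,        # phone NPU
                       edge_flops_per_s=50e9,        # edge server GPU
                       uplink_bytes_per_s=1.25e6))   # ~10 Mbit/s uplink
```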


Health State Estimation

arXiv.org Artificial Intelligence

Life's most valuable asset is health. Continuously understanding the state of our health and modeling how it evolves is essential if we wish to improve it. Given that people today live with more data about their lives than at any other time in history, the challenge rests in interweaving this data with the growing body of knowledge to continually compute and model the health state of an individual. This dissertation presents an approach to building a personal model and dynamically estimating the health state of an individual by fusing multi-modal data and domain knowledge. The system is stitched together from four essential abstraction elements: 1. the events in our life, 2. the layers of our biological systems (from the molecular level to the organism), 3. the functional utilities that arise from biological underpinnings, and 4. how we interact with these utilities in the reality of daily life. Connecting these four elements via graph network blocks forms the backbone by which we instantiate a digital twin of an individual. Edges and nodes in this graph structure are then regularly updated with learning techniques as data is continuously digested. Experiments demonstrate the use of dense and heterogeneous real-world data from a variety of personal and environmental sensors to monitor individual cardiovascular health state. State estimation and individual modeling are the fundamental basis for departing from disease-oriented approaches towards a total health continuum paradigm. Precision in predicting health requires understanding state trajectory. By encasing this estimation within a navigational approach, a systematic guidance framework can plan actions to transition a current state towards a desired one. This work concludes by presenting this framework of combining the health state and personal graph model to perpetually plan and assist us in living life towards our goals.
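
As a rough illustration of the graph backbone described above, the following sketch shows a toy personal graph whose node states are folded in from streaming sensor readings. The node names, the exponential-moving-average update rule and the sample heart-rate values are assumptions made for illustration, not the dissertation's actual model.

```python
# Toy personal graph: nodes carry a smoothed state estimate that is updated
# as new observations arrive; edges link life events to biological signals.

from collections import defaultdict

class PersonalGraph:
    def __init__(self, alpha: float = 0.2):
        self.state = {}                   # node -> current estimated value
        self.edges = defaultdict(set)     # node -> connected nodes
        self.alpha = alpha                # smoothing factor for updates

    def add_edge(self, a: str, b: str) -> None:
        self.edges[a].add(b)
        self.edges[b].add(a)

    def ingest(self, node: str, reading: float) -> None:
        """Fold a new observation into the node's state estimate (EMA)."""
        prev = self.state.get(node, reading)
        self.state[node] = (1 - self.alpha) * prev + self.alpha * reading

# Illustrative use: link a daily-life event node to a cardiovascular node
# and stream a few heart-rate readings into the graph.
g = PersonalGraph()
g.add_edge("event:morning_run", "biology:heart_rate")
for hr in [62, 64, 118, 121, 97]:
    g.ingest("biology:heart_rate", hr)
print(g.state["biology:heart_rate"])
```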


FLAME: A Self-Adaptive Auto-labeling System for Heterogeneous Mobile Processors

arXiv.org Machine Learning

How to accurately and efficiently label data on a mobile device is critical for the success of training machine learning models on mobile devices. Auto-labeling data on mobile devices is challenging, because data is usually generated incrementally and there is the possibility of unknown labels. Furthermore, the rich hardware heterogeneity of mobile devices creates challenges in efficiently executing auto-labeling workloads. In this paper, we introduce Flame, an auto-labeling system that can label non-stationary data with unknown labels. Flame includes a runtime system that efficiently schedules and executes auto-labeling workloads on heterogeneous mobile processors. Evaluating Flame with eight datasets on a smartphone, we demonstrate that Flame enables auto-labeling with high labeling accuracy and high performance.
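
For context, one widely used auto-labeling strategy is confidence thresholding with an "unknown" fallback; the sketch below illustrates that general idea and is not Flame's actual algorithm or its heterogeneous-processor scheduler. The threshold and the example probabilities are assumptions.

```python
# Illustrative auto-labeling: accept the classifier's top prediction when it
# is confident enough, otherwise defer the sample as a potential unknown class.

import numpy as np

def auto_label(probs: np.ndarray, threshold: float = 0.8) -> np.ndarray:
    """probs: (n_samples, n_known_classes) softmax outputs from a classifier.
    Returns a label index per sample, or -1 for samples whose top confidence
    falls below the threshold (treated as a possible unknown label)."""
    top = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    return np.where(conf >= threshold, top, -1)

# Example: three incrementally generated samples; the middle one is deferred.
probs = np.array([[0.95, 0.03, 0.02],
                  [0.40, 0.35, 0.25],
                  [0.10, 0.85, 0.05]])
print(auto_label(probs))   # -> [ 0 -1  1]
```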


From Data to Actions in Intelligent Transportation Systems: a Prescription of Functional Requirements for Model Actionability

arXiv.org Artificial Intelligence

Advances in Data Science are lately permeating every field of Transportation Science and Engineering, making it straightforward to imagine that developments in the transportation sector will be data-driven. Nowadays, Intelligent Transportation Systems (ITS) could arguably be approached as a "story" that intensively produces and consumes large amounts of data. A diversity of sensing devices densely spread over the infrastructure, vehicles or the travelers' personal devices act as sources of data flows that are eventually fed to software running on automatic devices, actuators or control systems, producing, in turn, complex information flows between users, traffic managers, data analysts, traffic modeling scientists, etc. These information flows provide enormous opportunities to improve model development and decision-making. The present work aims to describe how data coming from diverse ITS sources can be used to learn and adapt data-driven models for efficiently operating ITS assets, systems and processes; in other words, for data-based models to fully become actionable. Grounded in this data modeling pipeline for ITS, we define the characteristics, engineering requisites and challenges intrinsic to its three constituent stages, namely data fusion, adaptive learning and model evaluation. We deliberately generalize model learning to be adaptive, since at the core of our paper is the firm conviction that most learners will have to adapt to the ever-changing scenarios underlying the majority of ITS applications. Finally, we provide a prospect of current research lines within the Data Science realm that can bring notable advances to data-based ITS modeling, which will eventually bridge the gap towards the practicality and actionability of such models.
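
As a small illustration of the adaptive-learning and model-evaluation stages, the sketch below updates a model incrementally on a synthetic drifting stream and evaluates it prequentially (predict first, then train). The synthetic data, the feature layout and the choice of SGDRegressor are illustrative assumptions rather than the paper's prescription.

```python
# Adaptive (online) learning with prequential evaluation on a drifting stream.

import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)
model = SGDRegressor(learning_rate="constant", eta0=0.01)
errors = []

for t in range(500):
    # Fused feature vector for one time step (e.g. flow, speed, occupancy).
    x = rng.normal(size=(1, 3))
    # Target with a slow drift, standing in for an ever-changing phenomenon.
    y = x @ np.array([1.0, -0.5, 0.3]) + 0.002 * t

    if t > 0:
        errors.append(abs(model.predict(x)[0] - y[0]))  # test ...
    model.partial_fit(x, y)                             # ... then train

print("mean absolute prequential error:", np.mean(errors))
```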


Mining User Behaviour from Smartphone data, a literature review

arXiv.org Machine Learning

To study users' travel behaviour and travel time between origin and destination, researchers employ travel surveys. Although there is consensus in the field about their potential, after more than ten years of research and field experimentation, smartphone-based travel surveys have still not taken off at a large scale. Here, computer intelligence algorithms take on the role that operators play in traditional travel surveys; since we train each algorithm on data, performance rests on data quality, and thus on the ground truth. Inaccurate validation negatively affects labels, algorithm training, and the precision of travel diaries, and therefore data validation itself, in a highly critical loop. Interestingly, these boundaries prove burdensome to push even for machine learning methods. To support optimal investment decisions by practitioners, we expose the drivers they should consider when assessing what they need against what they get. This paper highlights and examines the critical aspects of the underlying research and provides some recommendations: (i) from the device perspective, on the main physical limitations; (ii) from the application perspective, on the methodological framework deployed for the automatic generation of travel diaries; (iii) from the ground-truth perspective, on the relationship between user interaction, methods, and data.
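
As a toy example of one building block behind automatic travel-diary generation, the sketch below splits a trace of per-sample speeds into "move" and "stop" runs with a simple threshold. The threshold and the trace are illustrative assumptions; the systems reviewed in the paper are considerably more elaborate.

```python
# Naive trip segmentation: a GPS-derived speed trace is split into runs of
# "move" and "stop" samples using a single speed threshold.

def segment_trace(speeds_mps, stop_threshold=0.5):
    """speeds_mps: per-sample speeds in m/s. Returns a list of (state, length)
    runs, where state is 'move' or 'stop'."""
    runs = []
    for v in speeds_mps:
        state = "move" if v > stop_threshold else "stop"
        if runs and runs[-1][0] == state:
            runs[-1] = (state, runs[-1][1] + 1)
        else:
            runs.append((state, 1))
    return runs

# Example: walking, waiting at a bus stop, then riding the bus.
print(segment_trace([1.3, 1.4, 0.1, 0.0, 0.2, 8.5, 9.0, 8.8]))
```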


Photoshop for iPad road map details much-needed early 2020 updates

#artificialintelligence

With Photoshop on iPad's current tools, having to fix the edges of this extremely basic selection is annoyingly tedious. Adobe plans to rectify that in the first half of 2020. When Adobe launched the long-awaited iPad version of Photoshop in October, it looked like we had a little more waiting to do before it could really be used as intended; among other things, its masking and retouching tools are underpowered, especially given that Adobe is planning to charge $10 a month for a stand-alone version (at the moment, the cheapest way to get it is as part of the $10-a-month Photography plan). So today Adobe has given us more of an idea as to what its priorities are by telling us when we can expect some of those essential tools to appear. By the end of this year, it will have incorporated the Select Subject tool that just debuted in the desktop version; Select Subject uses AI to guess what the main subject of a photo is and select it.


A Convolutional Neural Network for User Identification based on Motion Sensors

arXiv.org Machine Learning

These mechanisms are susceptible to guessing (or spoofing in the case of fingerprint scans) and to side-channel attacks [1] such as smudge [2], reflection [3, 4] and video capture attacks [5-7]. On top of this, a fundamental limitation of PINs, passwords, and fingerprint scans is that these mechanisms require explicit user interaction. Due to the worldwide adoption of mobile devices and the advancement of technologies, mobile devices are now equipped with multiple sensors such as accelerometers, gyroscopes, and magnetometers, among others. The data recorded by these sensors during the interaction of the user with the mobile device can be used as biometric data to identify the user. Indeed, one-time or continuous user identification based on the data collected by the motion sensors of a mobile device is an actively studied task [8-22] that emerged after the integration of motion sensors into commonly used mobile devices. In this paper, we propose a novel deep learning approach that can identify the user from a single tap on the smartphone's touchscreen, using the discrete signals recorded by the accelerometer and the gyroscope during the tap gesture. By minimizing the user's interaction during verification and by removing the requirement to explicitly enter PINs or graphical passwords or to scan fingerprints, we eliminate many of the enumerated attacks. Our approach is based on transforming the discrete 3-axis signals from the accelerometer and the gyroscope into a gray-scale image representation that can be provided as input to deep convolutional neural networks (CNNs) [23, 24]. Our image representation is based on repeating the six one-dimensional (1D) signals using a modified version of de Bruijn sequences [25], such that the 3×3 convolutional filters in the first layer of the CNN get to "see" every possible tuple of three 1D signals in their receptive field.
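
Below is a minimal sketch of the signal-to-image idea described above: the six 1D motion signals are stacked as repeated rows of a gray-scale image and fed to a small CNN with 3×3 filters. The hand-picked row ordering stands in for the paper's modified de Bruijn sequence, and the tiny network is a generic placeholder, not the architecture used in the paper.

```python
# Turn six 1D motion signals into a gray-scale image by stacking repeated rows,
# then run a small CNN over it. Ordering, sizes and the network are illustrative.

import numpy as np
import torch
import torch.nn as nn

def signals_to_image(signals: np.ndarray, row_order) -> np.ndarray:
    """signals: (6, T) array with accelerometer x/y/z and gyroscope x/y/z.
    Returns a (len(row_order), T) gray-scale image normalised to [0, 1]."""
    img = signals[list(row_order)]            # repeat signal rows per the order
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-8)

# Illustrative ordering over the 6 signal indices (0..5), NOT the paper's
# de Bruijn construction.
row_order = [0, 1, 2, 3, 4, 5, 0, 2, 4, 1, 3, 5]

signals = np.random.randn(6, 64)              # one fake tap gesture
image = torch.tensor(signals_to_image(signals, row_order),
                     dtype=torch.float32)[None, None]   # (1, 1, H, W)

cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 50),                        # e.g. 50 enrolled users
)
print(cnn(image).shape)                       # torch.Size([1, 50])
```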


Edge Intelligence: Paving the Last Mile of Artificial Intelligence with Edge Computing

arXiv.org Artificial Intelligence

With the breakthroughs in deep learning, recent years have witnessed a booming of artificial intelligence (AI) applications and services, spanning from personal assistants to recommendation systems to video/audio surveillance. More recently, with the proliferation of mobile computing and the Internet of Things (IoT), billions of mobile and IoT devices are connected to the Internet, generating zillions of bytes of data at the network edge. Driven by this trend, there is an urgent need to push the AI frontiers to the network edge so as to fully unleash the potential of edge big data. To meet this demand, edge computing, an emerging paradigm that pushes computing tasks and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new interdisciplinary field, edge AI or edge intelligence, is beginning to receive a tremendous amount of interest. However, research on edge intelligence is still in its infancy, and a dedicated venue for exchanging recent advances in edge intelligence is highly desired by both the computer systems and artificial intelligence communities. To this end, we conduct a comprehensive survey of recent research efforts on edge intelligence. Specifically, we first review the background and motivation for artificial intelligence running at the network edge. We then provide an overview of the overarching architectures, frameworks and emerging key technologies for deep learning model training and inference at the network edge. Finally, we discuss future research opportunities on edge intelligence. We believe that this survey will elicit escalating attention, stimulate fruitful discussion and inspire further research ideas on edge intelligence.
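
One architecture pattern commonly discussed in this space is split (partitioned) inference: the early layers of a model run on the device and the remainder runs on an edge server, with only an intermediate feature map crossing the network. The sketch below illustrates the pattern; the model, the split point and the "send" stub are assumptions, not something prescribed by this survey.

```python
# Split inference: run layers [0, split) on-device, the rest on an edge server.

import torch
import torch.nn as nn

full_model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),       # cheap early layers
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),                              # heavier "server-side" head
)

split = 2                                           # layers before the split stay on-device
device_part = full_model[:split]
server_part = full_model[split:]

def send_to_edge_server(tensor: torch.Tensor) -> torch.Tensor:
    # Stand-in for serialising the feature map and sending it over the network.
    return tensor

x = torch.randn(1, 3, 32, 32)                       # one input frame
features = device_part(x)                           # runs on the mobile device
logits = server_part(send_to_edge_server(features)) # runs at the edge server
print(logits.shape)                                 # torch.Size([1, 10])
```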


Low-Power Computer Vision: Status, Challenges, Opportunities

arXiv.org Artificial Intelligence

Computer vision has achieved impressive progress in recent years. Meanwhile, mobile phones have become the primary computing platforms for millions of people. In addition to mobile phones, many autonomous systems rely on visual data for making decisions, and some of these systems have limited energy (such as unmanned aerial vehicles, also called drones, and mobile robots). These systems rely on batteries, and energy efficiency is critical. This article serves two main purposes: (1) examine the state of the art in low-power solutions for detecting objects in images. Since 2015, the IEEE Annual International Low-Power Image Recognition Challenge (LPIRC) has been held to identify the most energy-efficient computer vision solutions. This article summarizes the 2018 winners' solutions. (2) Suggest directions for research as well as opportunities for low-power computer vision.
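
To make "energy-efficient" concrete, a common figure of merit is accuracy achieved per unit of energy consumed. The sketch below computes such a ratio for two hypothetical detectors; it does not reproduce LPIRC's exact scoring rule, and all numbers are made up.

```python
# Illustrative efficiency figure of merit: detection accuracy per watt-hour.

def efficiency_score(map_score: float, energy_wh: float) -> float:
    """Higher is better: accuracy (e.g. mAP) achieved per watt-hour consumed."""
    return map_score / energy_wh

# Two hypothetical entries: a heavier, more accurate detector vs. a lighter one.
print(efficiency_score(map_score=0.42, energy_wh=2.0))   # 0.21
print(efficiency_score(map_score=0.30, energy_wh=0.8))   # 0.375 -> more efficient
```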


Age of AI -- The Paradigm Shift to Natural UI

#artificialintelligence

I have always loved products and technology. But ever since I was a child, I was especially fascinated by those big inventions, powered by transformative technological revolutions, that changed everything! So I felt extremely lucky when, about 20 years ago, at the beginning of my career, I was just in time for one of these revolutions: when the Internet happened. Through the connected PC, the world we lived in was transformed from a "physical world" -- where we used to go to places like libraries, and use things like encyclopedias and paper maps -- to a "digital world" -- where we consume digital information and services from the convenience of our home. What was especially amazing was the rate and scale of this transformation.