How to Curtail Oversensing in the Home

Communications of the ACM

Future homes will employ potentially hundreds of Internet of Things (IoT) devices whose sensors may inadvertently leak sensitive information. A previous Communications Inside Risks column ("The Future of the Internet of Things," Feb. 2017) discusses how the expected scale of the IoT introduces threats that require consideration and mitigation.2 Future homes are an IoT hotspot that will be particularly at risk. Sensitive information such as passwords, identification, and financial transactions is abundant in the home--as are sensor systems such as digital assistants, smartphones, and interactive home appliances that may unintentionally capture this sensitive information. IoT device manufacturers should employ sensor permissioning systems that limit each application's access to only the sensor data required for its operation, reducing the risk that malicious applications gain access to sensitive information. For example, a simple notepad application should not have microphone access.
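
To make the idea concrete, here is a minimal, deny-by-default sketch of such a sensor permissioning check. The SensorPermissionManager class, the manifest of declared sensors, and the sensor names are hypothetical illustrations, not part of the column.

```python
# Minimal, deny-by-default sketch of a sensor permissioning system.
# The class, manifest format, and sensor names are hypothetical illustrations.

ALL_SENSORS = {"microphone", "camera", "accelerometer", "gps", "temperature"}

class SensorPermissionManager:
    def __init__(self):
        # app name -> set of sensors the app declared it needs
        self._granted = {}

    def register_app(self, app_name, declared_sensors):
        unknown = set(declared_sensors) - ALL_SENSORS
        if unknown:
            raise ValueError(f"{app_name} requested unknown sensors: {unknown}")
        self._granted[app_name] = set(declared_sensors)

    def read_sensor(self, app_name, sensor):
        # Deny by default: an app only sees data it explicitly declared.
        if sensor not in self._granted.get(app_name, set()):
            raise PermissionError(f"{app_name} has no access to {sensor}")
        return f"<{sensor} reading>"  # placeholder for real sensor data

manager = SensorPermissionManager()
manager.register_app("notepad", {"temperature"})      # no microphone declared
print(manager.read_sensor("notepad", "temperature"))  # allowed
# manager.read_sensor("notepad", "microphone")        # raises PermissionError
```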


Machine Learning Based Solutions for Security of Internet of Things (IoT): A Survey

arXiv.org Machine Learning

Over the last decade, IoT platforms have grown into a global giant that touches every aspect of our daily lives, advancing human life with countless smart services. Because of easy accessibility and fast-growing demand for smart devices and networks, the IoT now faces more security challenges than ever before. Existing security measures can be applied to protect the IoT, but traditional techniques cannot keep pace with rapid technological advances or with the growing variety and severity of attacks. Thus, a robust, dynamically enhanced, and up-to-date security system is required for next-generation IoT systems. Huge technological advances in Machine Learning (ML) have opened many research avenues for addressing ongoing and future challenges in the IoT. ML is being utilized as a powerful technology to detect attacks and identify abnormal behaviors of smart devices and networks. In this survey paper, the architecture of the IoT is discussed, followed by a comprehensive literature review of ML approaches and of the importance of IoT security in terms of the different types of possible attacks. Moreover, potential ML-based solutions for IoT security are presented and future challenges are discussed.
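
As a concrete illustration of the kind of ML-based detection the survey covers, here is a minimal sketch using scikit-learn's IsolationForest to flag a misbehaving device from simple traffic features. The feature choices and the synthetic data are illustrative assumptions, not drawn from the paper.

```python
# Sketch: unsupervised anomaly detection on per-device network features.
# Features (packet rate, mean packet size, distinct destination count) and
# the synthetic data are illustrative assumptions, not from the survey.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Normal device behaviour: modest packet rates and few destinations.
normal = np.column_stack([
    rng.normal(50, 10, 500),    # packets per second
    rng.normal(300, 50, 500),   # mean packet size in bytes
    rng.poisson(3, 500),        # distinct destinations per minute
])
# A compromised device scanning the network looks very different.
anomalous = np.array([[900.0, 80.0, 120.0]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(normal[:5]))   # mostly  1 (inliers)
print(model.predict(anomalous))    # -1 (flagged as anomalous)
```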


pAElla: Edge-AI based Real-Time Malware Detection in Data Centers

arXiv.org Machine Learning

The increasing use of Internet-of-Things (IoT) devices for monitoring a wide spectrum of applications, along with the challenges of "big data" streaming support they often require for data analysis, is pushing for increased attention to the emerging edge computing paradigm. In particular, smart approaches to managing and analyzing data directly on the network edge are increasingly investigated, and Artificial Intelligence (AI) powered edge computing is envisaged to be a promising direction. In this paper, we focus on Data Centers (DCs) and Supercomputers (SCs), where a new generation of high-resolution monitoring systems is being deployed, opening new opportunities for analyses such as anomaly detection and security, but introducing new challenges for handling the vast amount of data produced. In detail, we report on a novel lightweight and scalable approach to increase the security of DCs/SCs that involves AI-powered edge computing on high-resolution power consumption measurements. The method -- called pAElla -- targets real-time Malware Detection (MD), runs on an out-of-band IoT-based monitoring system for DCs/SCs, and involves the Power Spectral Density of power measurements, along with AutoEncoders. Results are promising, with an F1-score close to 1, and False Alarm and Malware Miss rates close to 0%. We compare our method with state-of-the-art MD techniques and show that, in the context of DCs/SCs, pAElla can cover a wider range of malware, significantly outperforming SoA approaches in terms of accuracy. Moreover, we propose a methodology for online training suitable for DCs/SCs in production, and release an open dataset and code.
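
The pipeline can be sketched roughly as follows: compute the Power Spectral Density of a power trace, train an autoencoder on traces from healthy nodes, and flag traces whose reconstruction error is unusually high. The sampling rate, window size, network architecture (a small scikit-learn MLP standing in for the paper's autoencoder), and threshold below are illustrative assumptions rather than the actual pAElla configuration.

```python
# Sketch of a pAElla-style pipeline: log-PSD features from power traces,
# an autoencoder trained on healthy traces, reconstruction error as the score.
# Sampling rate, window size, network shape, and threshold are illustrative
# assumptions, not the paper's actual configuration.
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPRegressor

FS = 1000  # assumed sampling rate of the power monitor, in Hz

def psd_features(trace):
    _, pxx = welch(trace, fs=FS, nperseg=256)
    return np.log10(pxx + 1e-12)  # log-PSD stabilises the dynamic range

def healthy_trace(rng, n=2000):
    # Stand-in for the power trace of an uncompromised node.
    return np.sin(0.2 * np.arange(n)) + rng.normal(0, 0.1, n)

rng = np.random.default_rng(1)
train = np.stack([psd_features(healthy_trace(rng)) for _ in range(200)])

# A small MLP trained to reconstruct its own input acts as the autoencoder.
ae = MLPRegressor(hidden_layer_sizes=(32, 8, 32), max_iter=2000, random_state=0)
ae.fit(train, train)

def score(trace):
    feats = psd_features(trace).reshape(1, -1)
    return float(np.mean((ae.predict(feats) - feats) ** 2))

threshold = np.percentile([score(healthy_trace(rng)) for _ in range(50)], 99)
suspect = rng.normal(0, 1, 2000)   # very different spectral content
print(score(suspect) > threshold)  # expected: True, i.e. flagged
```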


A Survey on Edge Intelligence

arXiv.org Artificial Intelligence

Edge intelligence refers to a set of connected systems and devices for data collection, caching, processing, and analysis, based on artificial intelligence and located close to where the data is captured. The aim of edge intelligence is to enhance the quality and speed of data processing and to protect the privacy and security of the data. Although it emerged only recently, spanning the period from 2011 to now, this field of research has shown explosive growth over the past five years. In this paper, we present a thorough and comprehensive survey of the literature surrounding edge intelligence. We first identify four fundamental components of edge intelligence, namely edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results pertaining to proposed and deployed systems. We then aim for a systematic classification of the state of the solutions by examining research results and observations for each of the four components, and present a taxonomy that includes practical problems, adopted techniques, and application goals. For each category, we elaborate, compare, and analyse the literature from the perspectives of adopted techniques, objectives, performance, advantages and drawbacks, etc. This survey article provides a comprehensive introduction to edge intelligence and its application areas. In addition, we summarise the development of this emerging research field and the current state of the art, and discuss important open issues and possible theoretical and technical solutions.
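
As a toy illustration of the edge offloading component, the sketch below decides between local execution and offloading to an edge server by comparing estimated latencies. The cost model and every number in it are illustrative assumptions, not results from the survey.

```python
# Sketch of a latency-driven edge-offloading decision, one of the four
# components (caching, training, inference, offloading) named in the survey.
# The cost model and all numbers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float        # CPU cycles the task needs
    input_bytes: float   # data that must be shipped if offloaded

@dataclass
class Node:
    cpu_hz: float        # compute speed
    uplink_bps: float    # bandwidth from the device to this node (0 = local)

def latency(task: Task, node: Node) -> float:
    transfer = task.input_bytes * 8 / node.uplink_bps if node.uplink_bps else 0.0
    return transfer + task.cycles / node.cpu_hz

device = Node(cpu_hz=1e9, uplink_bps=0)      # execute locally
edge   = Node(cpu_hz=8e9, uplink_bps=50e6)   # nearby edge server

task = Task(cycles=4e9, input_bytes=2e6)
choice = min((device, edge), key=lambda n: latency(task, n))
print("offload to edge" if choice is edge else "run locally")
```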


Health State Estimation

arXiv.org Artificial Intelligence

Life's most valuable asset is health. Continuously understanding the state of our health and modeling how it evolves is essential if we wish to improve it. Given that people live with more data about their lives today than at any other time in history, the challenge rests in interweaving this data with the growing body of knowledge to compute and model the health state of an individual continually. This dissertation presents an approach to build a personal model and dynamically estimate the health state of an individual by fusing multi-modal data and domain knowledge. The system is stitched together from four essential abstraction elements: 1. the events in our life, 2. the layers of our biological systems (from the molecular level to the organism), 3. the functional utilities that arise from biological underpinnings, and 4. how we interact with these utilities in the reality of daily life. Connecting these four elements via graph network blocks forms the backbone by which we instantiate a digital twin of an individual. Edges and nodes in this graph structure are then regularly updated with learning techniques as data is continuously digested. Experiments demonstrate the use of dense and heterogeneous real-world data from a variety of personal and environmental sensors to monitor individual cardiovascular health state. State estimation and individual modeling are the fundamental basis for departing from disease-oriented approaches to a total health continuum paradigm. Precision in predicting health requires understanding state trajectory. By encasing this estimation within a navigational approach, a systematic guidance framework can plan actions to transition a current state towards a desired one. This work concludes by presenting this framework of combining the health state and personal graph model to perpetually plan and assist us in living life towards our goals.
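
A minimal sketch of the kind of layered personal graph the dissertation describes is shown below, using networkx; the node names, layer labels, and update rule are hypothetical placeholders rather than the dissertation's actual model.

```python
# Sketch of the four-layer personal graph described in the dissertation:
# events, biological layers, functional utilities, and daily-life interactions,
# linked as a graph whose node states are updated as new sensor data arrives.
# Node names and the update rule are hypothetical illustrations.
import networkx as nx

G = nx.DiGraph()
G.add_node("event:morning_run",        layer="event",       state=None)
G.add_node("bio:cardiovascular",       layer="biological",  state=0.7)
G.add_node("utility:aerobic_capacity", layer="functional",  state=0.6)
G.add_node("life:commute_by_bike",     layer="interaction", state=None)

G.add_edge("event:morning_run", "bio:cardiovascular")
G.add_edge("bio:cardiovascular", "utility:aerobic_capacity")
G.add_edge("utility:aerobic_capacity", "life:commute_by_bike")

def ingest(graph, node, observation, rate=0.2):
    """Blend a new observation into a node and nudge its downstream nodes."""
    old = graph.nodes[node].get("state")
    old = observation if old is None else old
    graph.nodes[node]["state"] = (1 - rate) * old + rate * observation
    for succ in graph.successors(node):
        if graph.nodes[succ].get("state") is not None:
            graph.nodes[succ]["state"] += rate * (observation - old)

ingest(G, "bio:cardiovascular", 0.8)  # e.g. a better resting-heart-rate day
print(G.nodes["utility:aerobic_capacity"]["state"])
```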


Conceptualizing the 2020's: The Decade of the Internet of Things - insideBIGDATA

#artificialintelligence

Several lofty predictions about the number of connected devices in the IoT begin this year; many developments directly impacting its adoption rates will flourish in the coming 10 years, rendering it the premier expression of data management. A number of trends in edge computing, 5G, cyber security, Artificial Intelligence, and digital twins will significantly alter what the IoT means to enterprises. Opportunities for monetization will proliferate as, perhaps, will the potential for misuse. Exactly which of these trajectories will dominate remains to be seen. "The creators of IoT apps and the whole platform, including hardware and software, are coming up with ways we never envisioned for what IoT systems could do," observed Cybera President Cliff Duffey.


Federated Learning for Resource-Constrained IoT Devices: Panoramas and State-of-the-art

arXiv.org Machine Learning

Nowadays, devices are equipped with advanced sensors and higher processing/computing capabilities. Further, widespread Internet availability enables communication among sensing devices. As a result, vast amounts of data are generated on edge devices to drive the Internet-of-Things (IoT), crowdsourcing, and other emerging technologies. The collected extensive data can be pre-processed, scaled, classified, and, finally, used for predicting future events with machine learning (ML) methods. In traditional ML approaches, data is sent to and processed on a central server, which incurs communication overhead, processing delay, privacy leakage, and security issues. To overcome these challenges, each client can be trained locally on its available data while learning from the global model. This decentralized learning structure is referred to as Federated Learning (FL). However, in large-scale networks there may be clients with varying computational resource capabilities, which can lead to implementation and scalability challenges for FL techniques. In this paper, we first introduce some recently implemented real-life applications of FL. We then emphasize the core challenges of implementing FL algorithms from the perspective of the resource limitations (e.g., memory, bandwidth, and energy budget) of client devices. We finally discuss open issues associated with FL and highlight future directions in the FL area concerning resource-constrained devices.
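
A minimal NumPy sketch of this decentralized structure, in the spirit of federated averaging (FedAvg), is shown below: each client takes a few local gradient steps on its own data and the server averages the resulting models, weighted by client data size. The linear model, data, and hyper-parameters are illustrative assumptions, not a setup prescribed by the paper.

```python
# Minimal sketch of federated averaging (FedAvg) for a linear model: each
# client takes local gradient steps on its own data, and the server averages
# the updates weighted by client data size. Data and hyper-parameters are
# illustrative; the survey itself does not prescribe this exact setup.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_client(n):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(0, 0.1, n)
    return X, y

clients = [make_client(n) for n in (20, 50, 200)]  # clients with unequal data
global_w = np.zeros(2)

for _round in range(30):
    updates, sizes = [], []
    for X, y in clients:
        w = global_w.copy()
        for _ in range(5):                       # a few local gradient steps
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.05 * grad
        updates.append(w)
        sizes.append(len(y))
    # Server: weighted average of client models, proportional to data size.
    global_w = np.average(updates, axis=0, weights=sizes)

print(global_w)  # converges toward [2, -1]
```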


Oliver Letwin, the unlikely merchant of technological doom

The Guardian

Oliver Letwin's strange and somewhat alarming new book begins at midnight on Thursday 31 December 2037. In Swindon – stay with me! – a man called Aameen Patel is working the graveyard shift at Highways England's traffic HQ when his computer screen goes blank, and the room is plunged into darkness. He tries to report these things to his superiors, but can get no signal on his mobile. Looking at the motorway from the viewing window by his desk, he observes, not an orderly stream of traffic, but a dramatic pile-up of crashed cars and lorries – at which point he realises something is seriously amiss. In the Britain of 2037, everything, or almost everything, is controlled by 7G wireless technology, from the national grid to the traffic (not only are cars driverless; a vehicle cannot even join a motorway without logging into an "on-route guidance system"). There is, then, only one possible explanation: the entire 7G network must have gone down. It sounds like I'm describing a novel – and it's true that Aameen Patel will soon be joined by another fictional creation in the form of Bill Donoghue, who works at the Bank of England, and whose job it will be to tell the prime minister that the country is about to pay a heavy price for its cashless economy, given that even essential purchases will not be possible until the network is back up (Bill's mother-in-law is also one of thousands of vulnerable people whose carers will soon be unable to get to them, the batteries in their electric cars having gone flat).


Artificial Intelligence for Digital Agriculture at Scale: Techniques, Policies, and Challenges

arXiv.org Artificial Intelligence

Digital agriculture promises to transform agricultural throughput. It can do this by applying data science and engineering to map input factors to crop throughput, while bounding the available resources. In addition, as data volumes and varieties increase with growing sensor deployment in agricultural fields, data engineering techniques will also be instrumental in the collection of distributed data as well as its distributed processing. These steps have to be carried out such that the latency requirements of the end users and applications are satisfied. Understanding how farm technology and big data can improve farm productivity can significantly increase the world's food production by 2050 in the face of constrained arable land and receding water levels. While much has been written about digital agriculture's potential, little is known about the economic costs and benefits of these emergent systems. In particular, the on-farm decision-making processes, both in terms of adoption and optimal implementation, have not been adequately addressed. For example, if an algorithm needs data from multiple data owners to be pooled together, that raises the question of data ownership. This paper is the first to bring together the important questions that will guide the end-to-end pipeline for the evolution of a new generation of digital agricultural solutions, driving the next revolution in agriculture and sustainability under one umbrella.
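
As a toy illustration of mapping input factors to crop throughput under bounded resources, the sketch below maximizes a simple yield-response function subject to water and fertilizer budgets using scipy.optimize. The response function and all numbers are illustrative assumptions, with the toy function standing in for a model that would be fitted from field data.

```python
# Sketch of choosing input factors to maximise predicted crop throughput
# under resource bounds. The quadratic yield-response function stands in for
# a model fitted from field data; all numbers are illustrative assumptions.
from scipy.optimize import minimize

def predicted_yield(water_mm, fert_kg_ha):
    # Toy response with diminishing returns in both inputs (t/ha).
    return (2.0 + 0.04 * water_mm - 4e-5 * water_mm**2
                + 0.03 * fert_kg_ha - 8e-5 * fert_kg_ha**2)

budget = {"water_mm": 400.0, "fert_kg_ha": 150.0}  # per-hectare resource bounds

result = minimize(
    lambda x: -predicted_yield(*x),  # maximise yield = minimise its negative
    x0=[100.0, 50.0],
    bounds=[(0.0, budget["water_mm"]), (0.0, budget["fert_kg_ha"])],
    method="L-BFGS-B",
)
water_opt, fert_opt = result.x
print(f"apply ~{water_opt:.0f} mm water and ~{fert_opt:.0f} kg/ha fertilizer "
      f"for a predicted yield of {predicted_yield(water_opt, fert_opt):.2f} t/ha")
```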


FBI warns hackers can use smart home devices to 'do a virtual drive-by of your digital life'

Daily Mail - Science & tech

Smart home devices are designed to make our lives easier, but they also make it easier for hackers to infiltrate our lives. The FBI has sent out a warning that 'hackers can use those innocent devices to do a virtual drive-by of your digital life.' The agency urges users to regularly change passwords, check for firmware updates, and keep smart devices on a separate network from devices that hold sensitive data. Digital assistants, smart watches, fitness trackers, home security devices, thermostats, refrigerators, and even light bulbs are all on the list of devices that can be infiltrated by cybercriminals. And if these devices, among other smart home technology, are not properly protected, they can be used by hackers to 'do a virtual drive-by of your digital life.' Samsung are developing an interactive kitchen that includes a fridge, oven and TV.