Sensor Networks


MILP, pseudo-boolean, and OMT solvers for optimal fault-tolerant placements of relay nodes in mission critical wireless networks

arXiv.org Artificial Intelligence

In critical infrastructures like airports, much care must be devoted to protecting radio communication networks from external electromagnetic interference. Protection of such mission-critical radio communication networks is usually tackled by exploiting radiogoniometers: at least three suitably deployed radiogoniometers, together with a gateway gathering information from them, make it possible to monitor and localise sources of electromagnetic emissions that are not supposed to be present in the monitored area. Typically, radiogoniometers are connected to the gateway through relay nodes. As a result, some degree of fault tolerance in the network of relay nodes is essential for reliable monitoring. On the other hand, deploying relay nodes is typically quite expensive, leaving two conflicting requirements: minimise costs while guaranteeing a given fault tolerance. In this paper, we address the problem of computing a deployment for relay nodes that minimises the relay node network cost while guaranteeing that the network keeps working even when some of the relay nodes (up to a given maximum number) become faulty (fault tolerance). We show that, by means of a computation-intensive pre-processing on an HPC infrastructure, the above optimisation problem can be encoded as a 0/1 Linear Program, becoming suitable to be approached with standard Artificial Intelligence reasoners like MILP, PB-SAT, and SMT/OMT solvers. Our problem formulation enables us to present experimental results comparing the performance of these three solving technologies on a real case study of a relay node network deployment in areas of the Leonardo da Vinci Airport in Rome, Italy.
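
As a flavour of what such a 0/1 encoding looks like, the sketch below models a heavily simplified version of the placement problem with the open-source PuLP modeller: every radiogoniometer must be reachable through at least K+1 deployed relays, so that connectivity survives any K relay faults. The candidate sites, costs, and coverage matrix are illustrative assumptions, not data or the exact encoding from the paper.

import pulp

sites = range(6)            # candidate relay positions (hypothetical)
cost = [3, 2, 4, 2, 5, 3]   # deployment cost per site (hypothetical)
# covers[g][s] == 1 if a relay at site s can route radiogoniometer g
covers = [[1, 1, 0, 0, 1, 0],
          [0, 1, 1, 0, 0, 1],
          [1, 0, 0, 1, 0, 1]]
K = 1                       # tolerate up to K simultaneous relay faults

prob = pulp.LpProblem("relay_placement", pulp.LpMinimize)
x = [pulp.LpVariable(f"x{s}", cat="Binary") for s in sites]
prob += pulp.lpSum(cost[s] * x[s] for s in sites)   # minimise total cost
# each radiogoniometer needs K+1 deployed relays covering it, so that
# connectivity survives the failure of any K relays
for g in range(len(covers)):
    prob += pulp.lpSum(covers[g][s] * x[s] for s in sites) >= K + 1
prob.solve()
print("deploy at sites:", [s for s in sites if x[s].value() == 1])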


A discrete optimisation approach for target path planning whilst evading sensors

arXiv.org Artificial Intelligence

In this paper we deal with a practical problem that arises in military situations. The problem is to plan a path for one (or more) agents to reach a target without being detected by enemy sensors. Agents are not passive; rather, they can (within limits) initiate actions which aid evasion, namely knockout (completely disabling sensors) and confusion (reducing sensor detection probabilities). Agent actions are path dependent and time limited. By path dependent we mean that an agent needs to be sufficiently close to a sensor to knock it out; by time limited we mean that a limit is imposed on how long a sensor remains knocked out or confused before it reverts to its original operating state. The approach adopted breaks the continuous space in which agents move into a discrete space. This enables the problem to be formulated mathematically as a zero-one integer program with linear constraints, as illustrated below. The advantage of representing the problem in this manner is that powerful commercial optimisation packages exist to solve it to proven global optimality. Computational results are presented for a number of randomly generated test problems.
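
As a concrete illustration (not the paper's exact model), a minimal zero-one formulation might look as follows. Let $x_{i,t} \in \{0,1\}$ indicate that the agent occupies discrete cell $i$ at time step $t$, let $N(j)$ be the set of cells from which cell $j$ is reachable in one step, and let $q_{i,t}$ be the probability that the sensors fail to detect an agent in cell $i$ at time $t$ (the quantity that knockout and confusion actions would raise). Taking logarithms makes the non-detection objective linear:

$$\max \sum_{i,t} x_{i,t} \log q_{i,t} \quad \text{s.t.} \quad \sum_i x_{i,t} = 1 \;\;\forall t, \qquad x_{j,t+1} \le \sum_{i \in N(j) \cup \{j\}} x_{i,t} \;\;\forall j,t, \qquad x_{i,t} \in \{0,1\}.$$

The first constraint keeps the agent in exactly one cell per time step; the second restricts movement to neighbouring cells.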


Privacy Assessment of Federated Learning using Private Personalized Layers

arXiv.org Artificial Intelligence

Federated Learning (FL) is a collaborative scheme to train a learning model across multiple participants without sharing data. While FL is a clear step towards protecting users' privacy, various inference attacks against it have been developed. In this paper, we quantify the utility and privacy trade-off of an FL scheme using private personalized layers. While this scheme has been proposed as a local adaptation to improve the accuracy of the model through local personalization, it also has the advantage of minimizing the information about the model exchanged with the server. However, the privacy of such a scheme has never been quantified. Our evaluations using a motion sensor dataset show that personalized layers speed up the convergence of the model and slightly improve the accuracy for all users compared to a standard FL scheme, while better preventing both attribute and membership inferences compared to an FL scheme using local differential privacy.
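
A minimal sketch of the aggregation step under such a scheme, assuming the model is split into shared base layers and private personalized layers (layer names and shapes below are hypothetical): clients upload only the base-layer weights, the server averages them, and the personalized head never leaves the device.

import numpy as np

def fed_avg(client_weights):
    """Average the shared base layers across clients (plain FedAvg)."""
    keys = client_weights[0].keys()
    return {k: np.mean([w[k] for w in client_weights], axis=0) for k in keys}

# each client uploads only its shared base layers
clients = [
    {"base.w": np.random.randn(4, 4), "base.b": np.zeros(4)},
    {"base.w": np.random.randn(4, 4), "base.b": np.zeros(4)},
]
global_base = fed_avg(clients)  # server-side aggregation
# on-device, each client composes global_base with its private head:
#   model = compose(global_base, personal_head)  # personal_head stays local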


GRAVITAS: Graphical Reticulated Attack Vectors for Internet-of-Things Aggregate Security

arXiv.org Artificial Intelligence

Internet-of-Things (IoT) and cyber-physical systems (CPSs) may consist of thousands of devices connected in a complex network topology. The diversity and complexity of these components present an enormous attack surface, allowing an adversary to exploit security vulnerabilities of different devices to execute a potent attack. Though significant efforts have been made to improve the security of individual devices in these systems, little attention has been paid to security at the aggregate level. In this article, we describe a comprehensive risk management system, called GRAVITAS, for IoT/CPS that can identify undiscovered attack vectors and optimize the placement of defenses within the system for optimal performance and cost. While existing risk management systems consider only known attacks, our model employs a machine learning approach to extrapolate undiscovered exploits, enabling us to identify attacks overlooked by manual penetration testing (pen-testing). The model is flexible enough to analyze practically any IoT/CPS and provide the system administrator with a concrete list of suggested defenses that can reduce system vulnerability at optimal cost. GRAVITAS can be employed by governments, companies, and system administrators to design secure IoT/CPS at scale, providing a quantitative measure of security and efficiency in a world where IoT/CPS devices will soon be ubiquitous. Cyber-physical systems employ sensor data to monitor the physical environment, in systems ranging from a single Bluetooth-enabled smartwatch to a smart city containing millions of devices, and machine learning (ML) is also accelerating the adoption of large-scale IoT/CPS [10].
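
The aggregate-level idea can be illustrated with a toy attack graph: devices are nodes, exploits are edges with success probabilities, and a multi-step attack vector is scored by composing the probabilities along a path. The graph, device names, and probabilities below are hypothetical, and this is far simpler than the GRAVITAS model itself.

import networkx as nx

g = nx.DiGraph()
g.add_edge("smart_lock", "gateway", p=0.3)  # p = exploit probability (hypothetical)
g.add_edge("camera", "gateway", p=0.2)
g.add_edge("gateway", "plc", p=0.5)

def vector_probability(graph, path):
    """Probability that every exploit along a multi-step attack path succeeds."""
    prob = 1.0
    for u, v in zip(path, path[1:]):
        prob *= graph[u][v]["p"]
    return prob

# enumerate attack vectors reaching a critical asset and score them
for src in ("smart_lock", "camera"):
    for path in nx.all_simple_paths(g, src, "plc"):
        print(path, vector_probability(g, path))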


Scientists develop robotic third thumb controlled by sensors on the big toes

Daily Mail - Science & tech

Scientists have developed a robotic 3D-printed 'third thumb' that's controlled using pressure sensors on the underside of the big toes. The thumb, created by a researcher at University College London (UCL), is worn on the side of the hand opposite the actual thumb, near the little finger. In trials, researchers found the human brain can adapt to the use of an extra thumb, but that it may alter the relationship between the brain and the biological hand. Volunteers who were fitted with the third thumb effectively carried out dexterous tasks, like building a tower of blocks, with one hand, researchers found. Having a third thumb could let people carry more objects than usual, hold and open a bottle of soft drink with one hand, or even become a maestro on the guitar.


The Synergy of Complex Event Processing and Tiny Machine Learning in Industrial IoT

arXiv.org Artificial Intelligence

Focusing on comprehensive networking, big data, and artificial intelligence, the Industrial Internet-of-Things (IIoT) facilitates efficiency and robustness in factory operations. Various sensors and field devices play a central role, as they generate a vast amount of real-time data that can provide insights into manufacturing. The synergy of complex event processing (CEP) and machine learning (ML) has been actively developed in recent years in IIoT to identify patterns in heterogeneous data streams and fuse raw data into tangible facts. In a traditional compute-centric paradigm, the raw field data are continuously sent to the cloud and processed centrally. As IIoT devices become increasingly pervasive and ubiquitous, concerns arise because transmitting such amounts of data is energy-intensive, vulnerable to interception, and subject to high latency. The data-centric paradigm can essentially solve these problems by empowering IIoT to perform decentralized on-device ML and CEP, keeping data primarily on edge devices and minimizing communications. However, this is no mean feat, because most IIoT edge devices are designed to be computationally constrained with low power consumption. This paper proposes a framework that exploits the synergy of ML and CEP at the edge in distributed sensor networks. By leveraging tiny ML and micro CEP, we shift the computation from the cloud to the power-constrained IIoT devices and allow users to adapt the on-device ML model and the CEP reasoning logic flexibly on the fly, without having to re-upload the whole program. Lastly, we evaluate the proposed solution and show its effectiveness and feasibility using an industrial use case of machine safety monitoring.
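
A minimal sketch of this kind of on-device pipeline, with a stand-in classifier playing the role of the TinyML model and a sliding-window rule playing the role of the micro CEP engine (all names, thresholds, and the safety rule itself are hypothetical):

from collections import deque

def classify(sample):
    """Stand-in for a TinyML model: flags a person when the reading is high."""
    return "person" if sample > 0.8 else "clear"

window = deque(maxlen=5)  # sliding window over recent classified events

def on_sensor_reading(sample, machine_running):
    window.append(classify(sample))
    # CEP-style rule: alert if a person is seen in 3 of the last 5 readings
    # while the machine is running; the rule (not the model) could be
    # swapped at runtime without re-uploading the whole program
    if machine_running and list(window).count("person") >= 3:
        return "ALERT: stop machine"
    return "ok"

for reading in (0.9, 0.85, 0.2, 0.95, 0.9):
    print(on_sensor_reading(reading, machine_running=True))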


Applications of Artificial Intelligence to aid detection of dementia: a narrative review on current capabilities and future directions

arXiv.org Artificial Intelligence

With populations ageing, the number of people with dementia worldwide is expected to triple to 152 million by 2050. Seventy percent of cases are due to Alzheimer's disease (AD) pathology, and there is a 10-20 year 'pre-clinical' period before significant cognitive decline occurs. We urgently need cost-effective, objective methods to detect AD, and other dementias, at an early stage. Risk factor modification could prevent 40% of cases, and drug trials would have greater chances of success if participants were recruited at an earlier stage. Currently, detection of dementia relies largely on pen-and-paper cognitive tests, but these are time-consuming and insensitive to pre-clinical phases. Specialist brain scans and body fluid biomarkers can detect the earliest stages of dementia but are too invasive or expensive for widespread use. With the advancement of technology, Artificial Intelligence (AI) shows promising results in assisting with the detection of early-stage dementia. Existing AI-aided methods and potential future research directions are reviewed and discussed.


IDMT-Traffic: An Open Benchmark Dataset for Acoustic Traffic Monitoring Research

arXiv.org Artificial Intelligence

In many urban areas, traffic load and noise pollution are constantly increasing. Automated systems for traffic monitoring are promising countermeasures, which make it possible to systematically quantify and predict local traffic flow in order to support municipal traffic planning decisions. In this paper, we present a novel open benchmark dataset containing 2.5 hours of stereo audio recordings of 4718 vehicle passing events, captured with both high-quality sE8 and medium-quality MEMS microphones. This dataset is well suited to evaluating the use case of deploying audio classification algorithms on embedded sensor devices with restricted microphone quality and hardware processing power. In addition, this paper provides a detailed review of recent acoustic traffic monitoring (ATM) algorithms as well as the results of two benchmark experiments on vehicle type classification and direction-of-movement estimation using four state-of-the-art convolutional neural network architectures.
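
For illustration, a small convolutional classifier over spectrogram patches, of the general kind benchmarked in the paper, might look like the PyTorch sketch below; the architecture, input shape, and class count are assumptions, not the paper's models.

import torch
import torch.nn as nn

class VehicleCNN(nn.Module):
    def __init__(self, n_classes=4):  # e.g. car, truck, motorcycle, bus (hypothetical)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):  # x: (batch, 1, 64, 64) log-mel spectrogram patches
        z = self.features(x)
        return self.head(z.flatten(1))

logits = VehicleCNN()(torch.randn(8, 1, 64, 64))
print(logits.shape)  # torch.Size([8, 4])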


Winning The AI-Enabled War-at-Sea

#artificialintelligence

DARPA's Ocean of Things (OoT) program aims to achieve maritime situational awareness over large ocean areas by deploying thousands of small, low-cost floats that form a distributed sensor network. Each smart float carries a suite of commercially available sensors to collect environmental and activity data; the latter function involves automatically detecting, tracking, and identifying nearby ships and, potentially, close aircraft traffic. The floats use edge processing with detection algorithms and then transmit the semi-processed data periodically via the Iridium satellite constellation to a cloud network for on-shore storage. AI machine learning then combs through this sparse data in real time to uncover hidden insights. The floats are environmentally friendly, have a service life of around a year, and in buys of 50,000 have a unit cost of about US$500 each.


Anchor Nodes Positioning for Self-localization in Wireless Sensor Networks using Belief Propagation and Evolutionary Algorithms

arXiv.org Artificial Intelligence

Locating each node in a wireless sensor network is essential for starting the monitoring job and sending information about the area. One method that has been used in hard-to-access environments is randomly scattering the nodes over the area. To reduce the cost of equipping every node with GPS, only some nodes carry GPS (anchors); the remaining nodes are then located using the belief propagation algorithm. The number of anchor nodes must be kept small since they are expensive, and the placement of these nodes affects the algorithm's performance. Using multi-objective optimization, this paper introduces an algorithm that minimizes both the estimated location error and the number of anchor nodes. According to simulation results, this algorithm proposes a set of solutions with less energy consumption and less error than similar algorithms.
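
The multi-objective step can be sketched as a Pareto filter over candidate anchor placements, trading estimated localization error against anchor count. The candidate generator and the error function below are stand-ins (the paper evaluates placements by running belief propagation):

import random

def estimate_error(anchors):
    """Stand-in for running belief propagation and measuring location error."""
    return 1.0 / (len(anchors) + 1) + random.random() * 0.01

# candidate placements: random subsets of 20 possible anchor positions
candidates = [set(random.sample(range(20), k)) for k in (3, 4, 5, 6)]
scored = [(estimate_error(a), len(a), a) for a in candidates]

def pareto_front(points):
    """Keep placements not dominated in both objectives (error, anchor count)."""
    front = []
    for err, n, a in points:
        if not any(e2 <= err and n2 <= n and (e2, n2) != (err, n)
                   for e2, n2, _ in points):
            front.append((err, n, a))
    return front

for err, n, _ in pareto_front(scored):
    print(f"anchors={n}, estimated error={err:.3f}")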