Donta, Praveen Kumar
Optimizing Multi-DNN Inference on Mobile Devices through Heterogeneous Processor Co-Execution
Gao, Yunquan, Zhang, Zhiguo, Donta, Praveen Kumar, Dehury, Chinmaya Kumar, Wang, Xiujun, Niyato, Dusit, Zhang, Qiyang
Abstract--Deep Neural Networks (DNNs) are increasingly deployed across diverse industries, driving a growing demand to enable their capabilities on mobile devices. However, existing mobile inference frameworks often rely on a single processor to handle each model's inference, limiting hardware utilization and leading to suboptimal performance and energy efficiency. Expanding DNN accessibility on mobile platforms requires more adaptive and resource-efficient solutions that meet increasing computational demands without compromising device functionality. Nevertheless, parallel inference of multiple DNNs on heterogeneous processors remains a significant challenge. Several works have explored partitioning DNN operations into subgraphs to enable parallel execution across heterogeneous processors. However, these approaches typically generate excessive subgraphs based solely on hardware compatibility, increasing scheduling complexity and memory management overhead. To address these limitations, we propose an Advanced Multi-DNN Model Scheduling (ADMS) strategy that optimizes multi-DNN inference across heterogeneous processors on mobile devices. ADMS constructs an optimal subgraph partitioning strategy offline, considering both hardware support of operations and scheduling granularity, while employing a processor-state-aware scheduling algorithm that dynamically balances workloads based on real-time operational conditions. This ensures efficient workload distribution and maximizes the utilization of available processors. Experimental results show that, compared to vanilla inference frameworks, ADMS reduced multi-DNN inference latency by 4.04

To reduce interaction latency and lower server-side computing costs, an increasing number of applications are shifting inference tasks to mobile devices. In many real-world scenarios, multiple independent or related DNN models run concurrently on mobile devices.
For instance, in the smart agriculture scenario, farmers capture video frames using smartphone cameras and perform real-time parallel inference with multiple DNN models. These models include crop identification [5], pest and disease detection [6], plant health assessment [7], and soil quality analysis [8].

Y. Gao and X. Wang are with the School of Computer Science and Technology, Anhui Engineering Research Center for Intelligent Applications and Security of Industrial Internet, Anhui University of Technology, Ma'anshan, Anhui, 243032, China.
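The processor-state-aware scheduling idea described above can be sketched as a greedy assignment: each DNN subgraph is dispatched to the compatible processor whose queue currently holds the least pending work. This is a minimal illustrative sketch (names and cost model are assumptions), not the published ADMS algorithm.

```python
def schedule_subgraphs(subgraphs, processors):
    """Greedily assign each subgraph to the least-loaded compatible processor.

    subgraphs:  list of dicts {"id", "cost", "supported": set of processor names}
    processors: dict name -> current pending load (e.g. estimated ms of work)
    Returns a mapping subgraph id -> processor name.
    """
    assignment = {}
    load = dict(processors)  # copy of the real-time processor state
    for sg in sorted(subgraphs, key=lambda s: -s["cost"]):  # largest first
        candidates = [p for p in load if p in sg["supported"]]
        if not candidates:
            raise ValueError(f"no processor supports subgraph {sg['id']}")
        target = min(candidates, key=lambda p: load[p])
        assignment[sg["id"]] = target
        load[target] += sg["cost"]  # account for the newly queued work
    return assignment
```

Scheduling the largest subgraphs first is a common heuristic that keeps the final load spread tighter; the real scheduler would also weigh per-processor speed and memory, which this sketch omits.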
Benchmarking Dynamic SLO Compliance in Distributed Computing Continuum Systems
Lapkovskis, Alfreds, Sedlak, Boris, Magnússon, Sindri, Dustdar, Schahram, Donta, Praveen Kumar
Ensuring Service Level Objectives (SLOs) in large-scale architectures, such as Distributed Computing Continuum Systems (DCCS), is challenging due to their heterogeneous nature and varying service requirements across different devices and applications. Additionally, unpredictable workloads and resource limitations lead to fluctuating performance and violated SLOs. To improve SLO compliance in DCCS, one possibility is to apply machine learning; however, the design choices are often left to the developer. To that end, we provide a benchmark of Active Inference -- an emerging method from neuroscience -- against three established reinforcement learning algorithms (Deep Q-Network, Advantage Actor-Critic, and Proximal Policy Optimization). We consider a realistic DCCS use case: an edge device running a video conferencing application alongside a WebSocket server streaming videos. Using one of the respective algorithms, we continuously monitor key performance metrics, such as latency and bandwidth usage, to dynamically adjust parameters -- including the number of streams, frame rate, and resolution -- to optimize service quality and user experience. To test the algorithms' adaptability to constant system changes, we simulate dynamically changing SLOs and both instant and gradual data-shift scenarios, such as network bandwidth limitations and fluctuating device thermal states. Although the evaluated algorithms all showed advantages and limitations, our findings demonstrate that Active Inference is a promising approach for ensuring SLO compliance in DCCS, offering lower memory usage, stable CPU utilization, and fast convergence.
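The monitor-and-adjust loop described above can be sketched with a hand-written policy standing in for the learning agents the paper benchmarks. Metric names, thresholds, and the adjustment order (frame rate before resolution) are illustrative assumptions.

```python
def enforce_slos(metrics, config, slos):
    """Return an adjusted stream config that moves toward SLO compliance.

    metrics: observed values, e.g. {"latency_ms": 120, "bandwidth_mbps": 8}
    config:  tunable parameters, e.g. {"streams": 4, "fps": 30, "resolution": 1080}
    slos:    upper bounds on the metrics, same keys as `metrics`
    """
    new = dict(config)
    if metrics["latency_ms"] > slos["latency_ms"]:
        # latency violated: shed load by lowering the frame rate first
        new["fps"] = max(10, new["fps"] - 5)
    if metrics["bandwidth_mbps"] > slos["bandwidth_mbps"]:
        # bandwidth violated: drop the resolution one step
        steps = [360, 480, 720, 1080]
        idx = steps.index(new["resolution"])
        new["resolution"] = steps[max(0, idx - 1)]
    return new
```

In the benchmark itself, a learned policy (DQN, A2C, PPO, or Active Inference) replaces these fixed rules and must also recover quality once the violation clears.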
Adaptive Stream Processing on Edge Devices through Active Inference
Sedlak, Boris, Pujol, Victor Casamayor, Morichetta, Andrea, Donta, Praveen Kumar, Dustdar, Schahram
The current IoT scenario is witnessing a constant increase in the volume of data, generated in constant streams, calling for novel architectural and logical solutions to process it. Moving data handling towards the edge of the computing spectrum guarantees better load distribution and, in principle, lower latency and better privacy. However, managing such a structure is complex, especially when requirements, also referred to as Service Level Objectives (SLOs), specified by application owners and infrastructure managers, need to be ensured. Despite the rich body of proposals for Machine Learning (ML) based management solutions, researchers and practitioners still struggle to guarantee long-term prediction and control, and accurate troubleshooting. Therefore, we present a novel ML paradigm based on Active Inference (AIF) -- a concept from neuroscience that describes how the brain constantly predicts and evaluates sensory information to decrease long-term surprise. We implement and evaluate it in a heterogeneous, real stream processing use case, where an AIF-based agent continuously optimizes the fulfillment of three SLOs for three autonomous driving services running on multiple devices. The agent used causal knowledge to gradually develop an understanding of how its actions relate to requirement fulfillment, and which configurations to favor. Through this approach, our agent requires up to thirty iterations to converge to the optimal solution, showing its capability of offering accurate results in a short amount of time. Furthermore, thanks to AIF and its causal structures, our method guarantees full transparency of the decision making, making interpretation of the results and troubleshooting effortless.
Follow-Me AI: Energy-Efficient User Interaction with Smart Environments
Saleh, Alaa, Donta, Praveen Kumar, Morabito, Roberto, Motlagh, Naser Hossein, Lovén, Lauri
This article introduces Follow-Me AI, a concept designed to enhance user interactions with smart environments, optimize energy use, and provide better control over data captured by these environments. Through AI agents that accompany users, Follow-Me AI negotiates data management based on user consent, aligns environmental controls as well as user communication and computes resources available in the environment with user preferences, and predicts user behavior to proactively adjust the smart environment. The manuscript illustrates this concept with a detailed example of Follow-Me AI in a smart campus setting, detailing the interactions with the building's management system for optimal comfort and efficiency. Finally, this article looks into the challenges and opportunities related to Follow-Me AI.
Distributed AI in Zero-touch Provisioning for Edge Networks: Challenges and Research Directions
Hazra, Abhishek, Morichetta, Andrea, Murturi, Ilir, Lovén, Lauri, Dehury, Chinmaya Kumar, Pujol, Victor Casamayor, Donta, Praveen Kumar, Dustdar, Schahram
Zero-touch networks are anticipated to inaugurate a generation of intelligent and highly flexible resource provisioning strategies where multiple service providers collaboratively offer computation and storage resources. This transformation presents substantial challenges to network administration and service providers regarding sustainability and scalability. This article combines Distributed Artificial Intelligence (DAI) with Zero-touch Provisioning (ZTP) for edge networks. This combination helps to manage network devices seamlessly and intelligently by minimizing human intervention. In addition, we highlight several advantages of incorporating Distributed AI into ZTP in the context of edge networks. Further, we draw potential research directions to foster novel studies in this field and overcome the current limitations.
CommunityAI: Towards Community-based Federated Learning
Murturi, Ilir, Donta, Praveen Kumar, Dustdar, Schahram
Federated Learning (FL) has emerged as a promising paradigm to train machine learning models collaboratively while preserving data privacy. However, its widespread adoption faces several challenges, including scalability, heterogeneous data and devices, resource constraints, and security concerns. Despite its promise, FL has not been specifically adapted for community domains, primarily due to the wide-ranging differences in data types and context, devices and operational conditions, environmental factors, and stakeholders. In response to these challenges, we present a novel framework for Community-based Federated Learning called CommunityAI. CommunityAI enables participants to be organized into communities based on their shared interests, expertise, or data characteristics. Community participants collectively contribute to training and refining learning models while maintaining data and participant privacy within their respective groups. Within this paper, we discuss the conceptual architecture, system requirements, processes, and future challenges that must be solved. Finally, our goal within this paper is to present our vision regarding enabling a collaborative learning process within various communities.
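The community-scoped training described above can be sketched as federated averaging restricted to each community: clients are grouped by a community key, and each community aggregates only its members' updates. This is an illustrative reading of the framework (sample-weighted FedAvg), not its published algorithm.

```python
def community_fedavg(updates):
    """Average model weights per community, weighted by sample count.

    updates: list of (community, n_samples, weights) tuples, where
             weights is a flat list of floats from one client.
    Returns dict community -> aggregated weight list.
    """
    models = {}
    for community, n, weights in updates:
        total, acc = models.get(community, (0, [0.0] * len(weights)))
        # accumulate this client's weights scaled by its sample count
        acc = [a + n * w for a, w in zip(acc, weights)]
        models[community] = (total + n, acc)
    # normalize each community's accumulator into an averaged model
    return {c: [a / total for a in acc] for c, (total, acc) in models.items()}
```

Keeping aggregation inside each community is what preserves the paper's premise: raw data and individual updates never leave the group that shares the relevant context.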
Learning-driven Zero Trust in Distributed Computing Continuum Systems
Murturi, Ilir, Donta, Praveen Kumar, Pujol, Victor Casamayor, Morichetta, Andrea, Dustdar, Schahram
Converging Zero Trust (ZT) with learning techniques can solve various operational and security challenges in Distributed Computing Continuum Systems (DCCS). Implementing a centralized ZT architecture is seen as unsuitable for the computing continuum (e.g., computing entities with limited connectivity and visibility, etc.). At the same time, implementing decentralized ZT in the computing continuum requires understanding infrastructure limitations and novel approaches to enhance resource access management decisions. To overcome such challenges, we present a novel learning-driven ZT conceptual architecture designed for DCCS. We aim to enhance ZT architecture service quality by incorporating lightweight learning strategies such as Representation Learning (ReL) and distributing ZT components across the computing continuum. The ReL helps to improve the decision-making process by predicting threats or untrusted requests. Through an illustrative example, we show how the learning process detects and blocks the requests, enhances resource access control, and reduces network and computation overheads. Lastly, we discuss the conceptual architecture and processes, and provide a research agenda.
Equilibrium in the Computing Continuum through Active Inference
Sedlak, Boris, Pujol, Victor Casamayor, Donta, Praveen Kumar, Dustdar, Schahram
Computing Continuum (CC) systems are challenged to ensure the intricate requirements of each computational tier. Given the system's scale, the Service Level Objectives (SLOs) in which these requirements are expressed must be broken down into smaller parts that can be decentralized. We present our framework for collaborative edge intelligence enabling individual edge devices to (1) develop a causal understanding of how to enforce their SLOs, and (2) transfer knowledge to speed up the onboarding of heterogeneous devices. Through collaboration, they (3) increase the scope of SLO fulfillment. We implemented the framework and evaluated a use case in which a CC system is responsible for ensuring Quality of Service (QoS) and Quality of Experience (QoE) during video streaming. Our results showed that edge devices required only ten training rounds to ensure four SLOs; furthermore, the underlying causal structures were also rationally explainable. The addition of new types of devices can be done a posteriori; the framework allowed them to reuse existing models, even though the device type had been unknown. Finally, rebalancing the load within a device cluster allowed individual edge devices to recover their SLO compliance after a network failure from 22% to 89%.
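The decomposition step mentioned above — breaking a system-wide SLO into smaller parts that devices can evaluate locally — can be sketched minimally by splitting a global target proportionally to device capacity. The proportional rule and names are illustrative assumptions, not the paper's method.

```python
def decompose_slo(global_target, capacities):
    """Split a global throughput SLO across devices proportionally to capacity.

    global_target: e.g. 90 (frames/s the whole cluster must deliver)
    capacities:    dict device -> relative capacity (any positive scale)
    Returns dict device -> local target; the local targets sum to the global one.
    """
    total = sum(capacities.values())
    return {d: global_target * c / total for d, c in capacities.items()}
```

Each device can then check only its own local target, which is what makes decentralized SLO evaluation (and the load rebalancing after failures) possible without central data collection.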
Active Inference on the Edge: A Design Study
Sedlak, Boris, Pujol, Victor Casamayor, Donta, Praveen Kumar, Dustdar, Schahram
Machine Learning (ML) is a common tool to interpret and predict the behavior of distributed computing systems, e.g., to optimize the task distribution between devices. As more and more data is created by Internet of Things (IoT) devices, data processing and ML training are carried out by edge devices in close proximity. To ensure Quality of Service (QoS) throughout these operations, systems are supervised and dynamically adapted with the help of ML. However, as long as ML models are not retrained, they fail to capture gradual shifts in the variable distribution, leading to an inaccurate view of the system state. Moreover, as the prediction accuracy decreases, the reporting device should actively resolve uncertainties to improve the model's precision. Such a level of self-determination could be provided by Active Inference (ACI) -- a concept from neuroscience that describes how the brain constantly predicts and evaluates sensory information to decrease long-term surprise. We encompassed these concepts in a single action-perception cycle, which we implemented for distributed agents in a smart manufacturing use case. As a result, we showed how our ACI agent was able to quickly and traceably solve an optimization problem while fulfilling QoS requirements.
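The action-perception cycle described above can be sketched in a few lines: the agent predicts a metric, measures the prediction error ("surprise"), corrects its belief, and acts only when surprise exceeds a bound. This is a deliberately minimal sketch (learning rate and bound are assumptions); the paper's ACI agent is richer.

```python
class ACICycle:
    def __init__(self, belief=0.0, lr=0.5, bound=1.0):
        self.belief = belief   # current prediction of the observed metric
        self.lr = lr           # how fast perception corrects the belief
        self.bound = bound     # surprise level that triggers an action

    def step(self, observation):
        # surprise: how badly the current model predicted the observation
        surprise = abs(observation - self.belief)
        # perception: move the belief toward what was actually observed
        self.belief += self.lr * (observation - self.belief)
        # action: intervene only when the model was badly wrong
        return "adapt" if surprise > self.bound else "hold"
```

Repeated steps drive surprise down, which is the loop's notion of an accurate system view; an "adapt" return is where a real agent would retrain or reconfigure.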
Designing Reconfigurable Intelligent Systems with Markov Blankets
Sedlak, Boris, Pujol, Victor Casamayor, Donta, Praveen Kumar, Dustdar, Schahram
Compute Continuum (CC) systems comprise a vast number of devices distributed over computational tiers. Evaluating business requirements, i.e., Service Level Objectives (SLOs), requires collecting data from all those devices; if SLOs are violated, devices must be reconfigured to ensure correct operation. If done centrally, this dramatically increases the number of devices and variables that must be considered, while creating an enormous communication overhead. To address this, we (1) introduce a causality filter based on Markov blankets (MB) that limits the number of variables that each device must track, (2) evaluate SLOs decentralized on a device basis, and (3) infer optimal device configuration for fulfilling SLOs. We evaluated our methodology by analyzing video stream transformations and providing device configurations that ensure the Quality of Service (QoS). The devices thus perceived their environment and acted accordingly -- a form of decentralized intelligence.
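The Markov blanket filter above rests on a standard definition: in a causal DAG, a variable's blanket is its parents, its children, and its children's other parents, and conditioning on it shields the variable from the rest of the graph. That is what lets each device track only a few variables. A small sketch with an assumed toy graph:

```python
def markov_blanket(node, parents):
    """Markov blanket of `node` in a DAG.

    parents: dict mapping each node to the set of its parents.
    Returns the set {parents} | {children} | {children's other parents}.
    """
    blanket = set(parents.get(node, set()))                 # parents
    children = {c for c, ps in parents.items() if node in ps}
    blanket |= children                                     # children
    for c in children:                                      # co-parents
        blanket |= parents[c] - {node}
    return blanket
```

With, say, `latency` caused by `fps` and `load`, and `energy` caused by `fps`, the blanket of `fps` is `{latency, energy, load}` — every other variable in the graph can be ignored by the device tracking `fps`.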