Collaborating Authors: ecs


Sound Value Iteration for Simple Stochastic Games

Azeem, Muqsit, Kretinsky, Jan, Weininger, Maximilian

arXiv.org Artificial Intelligence

Value iteration (VI) [4] is in practice the most widely used method for reliable analysis of probabilistic systems, in particular Markov decision processes (MDPs) [21] and stochastic games (SGs) [8]. It is the default method in state-of-the-art model checkers such as Prism [18] and Storm [11] because it scales better in practice than strategy iteration or linear/quadratic programming [14, 19]. The price to pay is precision. Firstly, while the other methods yield precise results in theory (omitting floating-point issues), VI converges to the exact result only in the limit. Secondly, the precision of the intermediate iterations was until recently an open question. Given the importance of reliable precision in verification, many recent works have focused on modifying VI so that the imprecision can be bounded, yielding a stopping criterion. Consequently, (i) the computed result is reliable, and (ii) the procedure can even terminate early whenever the desired precision is achieved.
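The bounded-VI idea behind such stopping criteria can be sketched in a few lines: iterate a lower bound upward from 0 and an upper bound downward from 1, and stop once they are EPS-close. The tiny MDP and the Gauss-Seidel-style update below are illustrative assumptions, not taken from the paper.

```python
# Sketch of value iteration with a stopping criterion (interval iteration)
# for maximal reachability probabilities. The example MDP is hypothetical.

EPS = 1e-6

# actions map each state to a list of distributions over successor states
mdp = {
    0: [{1: 0.5, "sink": 0.5}, {0: 0.9, "goal": 0.1}],
    1: [{"goal": 0.7, "sink": 0.3}],
}

lo = {0: 0.0, 1: 0.0, "goal": 1.0, "sink": 0.0}  # converges from below
hi = {0: 1.0, 1: 1.0, "goal": 1.0, "sink": 0.0}  # converges from above

while max(hi[s] - lo[s] for s in mdp) > EPS:
    for bound in (lo, hi):
        for s, actions in mdp.items():
            # Bellman update: best action with respect to the current bound
            bound[s] = max(sum(p * bound[t] for t, p in dist.items())
                           for dist in actions)

# On termination, the true value of every state is trapped in [lo, hi].
print(lo[0], hi[0])
```

Unlike plain VI, the loop exits with a certified error bound: any value in [lo[0], hi[0]] is within EPS of the exact reachability probability (assuming, as in this toy model, no end components other than the goal and sink states).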


Quantum Learning and Estimation for Distribution Networks and Energy Communities Coordination

Zhuang, Yingrui, Cheng, Lin, Cao, Yuji, Li, Tongxin, Qi, Ning, Xu, Yan, Chen, Yue

arXiv.org Artificial Intelligence

Price signals from distribution networks (DNs) guide energy communities (ECs) to adjust energy usage, enabling effective coordination for reliable power system operation. However, this coordination faces significant challenges due to the limited availability of information (i.e., only the aggregated energy usage of ECs is available to DNs), and the high computational burden of accounting for uncertainties and the associated risks through numerous scenarios. To address these challenges, we propose a quantum learning and estimation approach to enhance coordination between DNs and ECs. Specifically, leveraging advanced quantum properties such as quantum superposition and entanglement, we develop a hybrid quantum temporal convolutional network-long short-term memory (Q-TCN-LSTM) model to establish an end-to-end mapping between ECs' responses and the price incentives from DNs. Moreover, we develop a quantum estimation method based on quantum amplitude estimation (QAE) and two phase-rotation circuits to significantly accelerate the optimization process under numerous uncertainty scenarios. Numerical experiments demonstrate that, compared to classical neural networks, the proposed Q-TCN-LSTM model improves the mapping accuracy by 69.2% while reducing the model size by 99.75% and the computation time by 93.9%. Compared to classical Monte Carlo simulation, QAE achieves comparable accuracy with a dramatic reduction in computational time (up to 99.99%) and requires significantly fewer computational resources.
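The quadratic-versus-linear gap that QAE exploits can be illustrated with back-of-the-envelope query counts; the functions below omit constant factors and confidence levels and are illustrative, not the paper's method.

```python
# Queries needed for additive error eps: classical Monte Carlo scales as
# O(1/eps^2), quantum amplitude estimation (QAE) as O(1/eps).
# Constant factors and failure probabilities are deliberately omitted.

def mc_samples(eps):
    return round(1 / eps ** 2)

def qae_queries(eps):
    return round(1 / eps)

for eps in (1e-2, 1e-3, 1e-4):
    print(f"eps={eps:g}  MC={mc_samples(eps)}  QAE={qae_queries(eps)}")
```

At eps = 1e-4 the classical estimator needs on the order of 10^8 samples while QAE needs on the order of 10^4 oracle queries, which is the source of the dramatic runtime reductions reported under many uncertainty scenarios.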


Maximizing the Promptness of Metaverse Systems using Edge Computing by Deep Reinforcement Learning

Thi-Thanh, Tam Ninh, Van Chien, Trinh, Tran, Hung, Son, Nguyen Hoai, Vo, Van Nhan

arXiv.org Artificial Intelligence

Metaverse and Digital Twin (DT) have attracted much academic and industrial attention as approaches to the future digital world. This paper introduces the advantages of deep reinforcement learning (DRL) in assisting a Digital Twin-based Metaverse system. We assume the system includes several Metaverse User devices collecting data from the real world to transfer into the virtual world, a Metaverse Virtual Access Point (MVAP) undertaking the processing of data, and an edge computing server that receives the offloaded data from the MVAP. The proposed model works in a dynamic environment with various parameters changing over time. The experiment results show that our proposed DRL algorithm is suitable for offloading tasks to ensure the promptness of the DT in a dynamic environment. I. INTRODUCTION In machine learning, reinforcement learning (RL) is an approach where an agent learns to make optimal decisions by exploring and interacting with a specific environment.


Lightweight Trustworthy Distributed Clustering

Li, Hongyang, Wu, Caesar, Chadli, Mohammed, Mammar, Said, Bouvry, Pascal

arXiv.org Artificial Intelligence

Ensuring data trustworthiness within individual edge nodes while facilitating collaborative data processing poses a critical challenge in edge computing systems (ECS), particularly in resource-constrained scenarios such as autonomous systems, sensor networks, industrial IoT, and smart cities. This paper presents a lightweight, fully distributed k-means clustering algorithm specifically adapted for edge environments, leveraging a distributed averaging approach with additive secret sharing, a secure multiparty computation technique, during the cluster center update phase to ensure the accuracy and trustworthiness of data across nodes. Edge computing, a paradigm emerging from distributed computing, emphasizes processing data at or near its source to minimize latency and reduce bandwidth consumption [1]-[3]. The rapid advancements in edge computing technologies, including algorithms for decentralized and efficient data processing, have significantly accelerated the deployment of distributed sensor networks. Two key properties of ECS are crucial in large-scale deployments.
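The additive-secret-sharing step used in the cluster-center update can be sketched as follows; the prime field, fixed-point scaling, and three-node setup are illustrative assumptions, not details from the paper.

```python
import random

# Sketch of additive secret sharing for a trustworthy distributed average,
# as in a cluster-center update. Field size and encoding are assumptions.

PRIME = 2 ** 61 - 1  # arithmetic is done modulo a large prime
SCALE = 10 ** 6      # fixed-point encoding of real-valued coordinates

def share(value, n):
    """Split an integer into n additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Each edge node holds one coordinate of a local cluster center.
locals_ = [1.25, 3.50, 2.25]
n = len(locals_)

# Every node distributes shares of its (fixed-point) value; node i then sums
# the i-th share of every secret, learning nothing about individual inputs.
all_shares = [share(round(v * SCALE), n) for v in locals_]
partial_sums = [sum(s[i] for s in all_shares) % PRIME for i in range(n)]

# Combining the partial sums reveals only the aggregate, never a single value.
total = sum(partial_sums) % PRIME
average = (total / SCALE) / n
print(average)  # matches the plain average of the local values
```

Only the aggregated sum is ever reconstructed, which is exactly the property needed when individual node data must stay private but the global cluster center must be exact.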


Load Forecasting for Households and Energy Communities: Are Deep Learning Models Worth the Effort?

Moosbrugger, Lukas, Seiler, Valentin, Wohlgenannt, Philipp, Hegenbart, Sebastian, Ristov, Sashko, Kepplinger, Peter

arXiv.org Artificial Intelligence

Accurate load forecasting is crucial for predictive control in many energy domain applications, with significant economic and ecological implications. Motivated by these implications, this study provides an extensive benchmark of state-of-the-art deep learning models for short-term load forecasting in energy communities. Namely, LSTM, xLSTM, and Transformers are compared with benchmarks such as KNNs, synthetic load models, and persistence forecasting models. This comparison considers different scales of aggregation (e.g., number of household loads) and varying training data availability (e.g., training data time spans). Further, the impact of transfer learning from synthetic (standard) load profiles and of the deep learning model size (i.e., parameter count) on forecasting error is investigated. Implementations are publicly available, and other researchers are encouraged to benchmark models using this framework. Additionally, a comprehensive case study, comprising an energy community of 50 households and a battery storage system, demonstrates the beneficial financial implications of accurate predictions. Key findings of this research include: (1) Simple persistence benchmarks outperform deep learning models for short-term load forecasting when the available training data is limited to six months or less; (2) Pretraining with publicly available synthetic load profiles improves the normalized Mean Absolute Error (nMAE) by an average of 1.28%pt during the first nine months of training data; (3) Increased aggregation significantly enhances the performance of deep learning models relative to persistence benchmarks; (4) Improved load forecasting, with an nMAE reduction of 1.1%pt, translates to an economic benefit of approximately 600 EUR per year in an energy community comprising 50 households.
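The persistence baseline and the nMAE metric behind finding (1) fit in a few lines. The 24-hour lag, the toy load profile, and normalization by the mean actual load are assumptions for illustration, not the paper's exact setup.

```python
# Sketch of a persistence forecast and the normalized MAE used to score it.
# Lag, profile, and normalization choice are illustrative assumptions.

def persistence_forecast(load, lag=24):
    """Predict each hour with the value observed `lag` hours earlier."""
    return load[:-lag]

def nmae(actual, predicted):
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / len(predicted)
    return mae / (sum(actual) / len(actual))  # normalize by mean actual load

# Two days of a toy hourly household load profile (kW); day 2 repeats day 1
# with a small offset, so persistence is a strong baseline here.
day1 = [0.3, 0.3, 0.4, 0.8, 1.2, 0.9, 0.5, 0.4] * 3
day2 = [v + 0.05 for v in day1]
load = day1 + day2

pred = persistence_forecast(load, lag=24)  # forecasts for day 2
print(round(nmae(day2, pred), 4))
```

With daily-periodic loads and little training data, any learned model must beat this near-trivial baseline to justify its cost, which is the crux of the paper's comparison.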


A Distributional Evaluation of Generative Image Models

Tam, Edric, Engelhardt, Barbara E

arXiv.org Machine Learning

Generative models are ubiquitous in modern artificial intelligence (AI) applications. Recent advances have led to a variety of generative modeling approaches that are capable of synthesizing highly realistic samples. Despite these developments, evaluating the distributional match between the synthetic samples and the target distribution in a statistically principled way remains a core challenge. We focus on evaluating image generative models, where studies often treat human evaluation as the gold standard. Commonly adopted metrics, such as the Fréchet Inception Distance (FID), do not sufficiently capture the differences between the learned and target distributions, because the assumption of normality ignores differences in the tails. We propose the Embedded Characteristic Score (ECS), a comprehensive metric for evaluating the distributional match between the learned and target sample distributions, and explore its connection with moments and tail behavior. We derive natural properties of ECS and show its practical use via simulations and an empirical study.
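The tail-blindness of Fréchet-style metrics is visible already in one dimension, where the Gaussian Fréchet distance has a closed form. The example below is illustrative and is not the ECS construction itself.

```python
import math

# 1-D illustration of why a mean/covariance-only (Fréchet-style) distance
# can miss tail differences between two distributions.

def frechet_1d(mu1, var1, mu2, var2):
    # 1-D case of the Frechet distance between Gaussians:
    # (mu1 - mu2)^2 + var1 + var2 - 2*sqrt(var1*var2)
    return (mu1 - mu2) ** 2 + var1 + var2 - 2 * math.sqrt(var1 * var2)

# A standard normal vs. a zero-mean, unit-variance Laplace distribution:
# identical first two moments, so the distance is 0 even though the tails
# differ sharply (excess kurtosis 0 vs. 3).
print(frechet_1d(0.0, 1.0, 0.0, 1.0))  # 0.0
```

Any metric that only compares means and (co)variances assigns these two distributions a perfect score, which is precisely the failure mode that a characteristic-function-based score is designed to detect.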


Conditional Enzyme Generation Using Protein Language Models with Adapters

Yang, Jason, Bhatnagar, Aadyot, Ruffolo, Jeffrey A., Madani, Ali

arXiv.org Artificial Intelligence

The conditional generation of proteins with desired functions and/or properties is a key goal for generative models. Existing methods based on prompting of language models can generate proteins conditioned on a target functionality, such as a desired enzyme family. However, these methods are limited to simple, tokenized conditioning and have not been shown to generalize to unseen functions. In this study, we propose ProCALM (Protein Conditionally Adapted Language Model), an approach for the conditional generation of proteins using adapters to protein language models. Our specific implementation of ProCALM involves finetuning ProGen2 to incorporate conditioning representations of enzyme function and taxonomy. ProCALM matches existing methods at conditionally generating sequences from target enzyme families. Impressively, it can also generate within the joint distribution of enzymatic function and taxonomy, and it can generalize to rare and unseen enzyme families and taxonomies. Overall, ProCALM is a flexible and computationally efficient approach, and we expect that it can be extended to a wide range of generative language models. Proteins, sequences of amino acids, are important molecules in all living organisms and have many industrial applications. Protein sequences can be modified or designed to have desired function(s) or optimized properties so that they are more useful for applications ranging from greener chemical synthesis to gene-editing for disease treatment (Buller et al., 2023).


Comparative Analysis of AWS Model Deployment Services

Bagai, Rahul

arXiv.org Artificial Intelligence

Amazon Web Services (AWS) offers three important model deployment services for model developers: SageMaker, Lambda, and Elastic Container Service (ECS). These services have critical advantages and disadvantages that influence model developers' adoption decisions. This comparative analysis reviews the merits and drawbacks of each. The analysis found that Lambda leads in efficiency, autoscaling, and integration during model development. ECS, in contrast, stands out in flexibility, scalability, and infrastructure control; it is better suited to managing complex container environments and to addressing budget concerns, making it the preferred option for model developers whose objective is complete freedom and framework flexibility with horizontal scaling, and for ensuring that performance requirements align with project goals and constraints. The service selection process considered factors including, but not limited to, load balancing and cost-effectiveness. ECS is the better choice when model development begins from the abstract: it offers unique benefits, such as the ability to scale both horizontally and vertically, making it the preferred tool for model deployment.


S+t-SNE -- Bringing dimensionality reduction to data streams

Vieira, Pedro C., Montrezol, João P., Vieira, João T., Gama, João

arXiv.org Artificial Intelligence

We present S+t-SNE, an adaptation of the t-SNE algorithm designed to handle infinite data streams. The core idea behind S+t-SNE is to update the t-SNE embedding incrementally as new data arrives, ensuring scalability and adaptability to streaming scenarios. By selecting the most important points at each step, the algorithm ensures scalability while keeping informative visualisations. A blind drift-management method adjusts the embedding space, facilitating continuous visualisation of evolving data dynamics. Our experimental evaluations demonstrate the effectiveness and efficiency of S+t-SNE. The results highlight its ability to capture patterns in a streaming scenario. We hope our approach offers researchers and practitioners a real-time tool for understanding and interpreting high-dimensional data.


Quantitative Analysis of Molecular Transport in the Extracellular Space Using Physics-Informed Neural Network

Xie, Jiayi, Li, Hongfeng, Cheng, Jin, Cai, Qingrui, Tan, Hanbo, Zu, Lingyun, Qu, Xiaobo, Han, Hongbin

arXiv.org Artificial Intelligence

The brain extracellular space (ECS), an irregular, extremely tortuous nanoscale space located between cells or between cells and blood vessels, is crucial for nerve cell survival. It plays a pivotal role in high-level brain functions such as memory, emotion, and sensation. However, the specific form of molecular transport within the ECS remains elusive. To address this challenge, this paper proposes a novel approach to quantitatively analyze the molecular transport within the ECS by solving an inverse problem derived from the advection-diffusion equation (ADE) using a physics-informed neural network (PINN). PINN provides a streamlined solution to the ADE without the need for intricate mathematical formulations or grid settings. Additionally, the optimization of PINN facilitates the automatic computation of the diffusion coefficient governing long-term molecule transport and the velocity of molecules driven by advection. Consequently, the proposed method allows for the quantitative analysis and identification of the specific pattern of molecular transport within the ECS through the calculation of the Peclet number. Experimental validation on two datasets of magnetic resonance images (MRIs) captured at different time points showcases the effectiveness of the proposed method. Notably, our simulations reveal identical molecular transport patterns between datasets representing rats with tracer injected into the same brain region. These findings highlight the potential of PINN as a promising tool for comprehensively exploring molecular transport within the ECS.
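Once the PINN has recovered the diffusion coefficient D and the advection velocity v, the Peclet-number classification is a one-line computation. The length scale, the threshold of 1, and the numerical values below are illustrative assumptions, not the paper's measurements.

```python
# Sketch of the Peclet-number classification of a transport pattern, given
# a diffusion coefficient D and advection velocity v (e.g., as recovered by
# a PINN). All numeric values here are hypothetical.

def peclet(v, D, L):
    """Pe = v*L/D: ratio of advective to diffusive transport rates
    over a characteristic length scale L."""
    return v * L / D

def transport_regime(pe, threshold=1.0):
    return "advection-dominated" if pe > threshold else "diffusion-dominated"

# Illustrative values: velocity in um/s, D in um^2/s, length scale in um.
pe = peclet(v=0.02, D=5.0, L=100.0)
print(pe, transport_regime(pe))
```

A Peclet number well below 1 indicates diffusion-dominated transport, well above 1 advection-dominated transport, which is how a single scalar summarizes the recovered transport pattern.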