Cloud Computing: Overviews


Human Digital Twins in Personalized Healthcare: An Overview and Future Perspectives

arXiv.org Artificial Intelligence

This evolution indicates an expansion from industrial uses into diverse fields, including healthcare [61], [59]. The core functionalities of digital twins include accurately mirroring their physical counterparts, capturing all associated processes in a data-driven manner, maintaining a continuous connection that synchronizes with the real-time state of their physical twins, and simulating physical behavior for predictive analysis [85]. In the context of healthcare, a novel extension of this technology manifests in the form of Human Digital Twins (HDTs), designed to provide a comprehensive digital mirror of individual patients. HDTs not only represent physical attributes but also integrate dynamic changes across molecular, physiological, and behavioral dimensions. This advancement aligns with the shift toward personalized healthcare (PH) paradigms, enabling tailored treatment strategies based on a patient's unique health profile and thereby enhancing preventive, diagnostic, and therapeutic processes in clinical settings [44], [50]. The personalization aspect of HDTs underscores their potential to revolutionize healthcare by facilitating precise and individualized treatment plans that optimize patient outcomes [72]. Although the potential of digital twins in healthcare has garnered much attention, practical applications are still at an early stage of development, with critical literature highlighting that many implementations remain exploratory [59]. Notably, institutions like the IEEE Computer Society and Gartner recognize this technology as a pivotal component in the ongoing evolution of healthcare systems that emphasize both precision and personalization [31], [89].
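
As an illustrative aside, a minimal sketch of the synchronization and prediction idea described above might look as follows in Python; the class, field names, and the toy risk score are hypothetical and are not taken from the paper.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class HumanDigitalTwin:
    """Illustrative container for a patient's digital twin state.

    The three dictionaries mirror the molecular, physiological, and behavioral
    dimensions mentioned above; all field names are hypothetical.
    """
    patient_id: str
    molecular: dict = field(default_factory=dict)       # e.g. biomarker levels
    physiological: dict = field(default_factory=dict)   # e.g. heart rate, blood pressure
    behavioral: dict = field(default_factory=dict)       # e.g. daily step count
    last_sync: Optional[datetime] = None

    def update_from_sensors(self, readings: dict) -> None:
        """Synchronize the twin with a new batch of real-world readings."""
        for dimension, values in readings.items():
            getattr(self, dimension).update(values)
        self.last_sync = datetime.now(timezone.utc)

    def predict_risk(self) -> float:
        """Toy predictive analysis: flag an elevated resting heart rate."""
        hr = self.physiological.get("resting_heart_rate", 60)
        return min(1.0, max(0.0, (hr - 60) / 60))

# Example: one synchronization cycle followed by a predictive query.
twin = HumanDigitalTwin(patient_id="patient-001")
twin.update_from_sensors({"physiological": {"resting_heart_rate": 84},
                          "behavioral": {"daily_steps": 4200}})
print(round(twin.predict_risk(), 2))  # 0.4
```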


Deep Reinforcement Learning for Job Scheduling and Resource Management in Cloud Computing: An Algorithm-Level Review

arXiv.org Artificial Intelligence

Cloud computing has revolutionized the provisioning of computing resources, offering scalable, flexible, and on-demand services to meet the diverse requirements of modern applications. At the heart of efficient cloud operations are job scheduling and resource management, which are critical for optimizing system performance and ensuring timely and cost-effective service delivery. However, the dynamic and heterogeneous nature of cloud environments presents significant challenges for these tasks, as workloads and resource availability can fluctuate unpredictably. Traditional approaches, including heuristic and meta-heuristic algorithms, often struggle to adapt to these real-time changes due to their reliance on static models or predefined rules. Deep Reinforcement Learning (DRL) has emerged as a promising solution to these challenges by enabling systems to learn and adapt policies based on continuous observations of the environment, facilitating intelligent and responsive decision-making. This survey provides a comprehensive review of DRL-based algorithms for job scheduling and resource management in cloud computing, analyzing their methodologies, performance metrics, and practical applications. We also highlight emerging trends and future research directions, offering valuable insights into leveraging DRL to advance both job scheduling and resource management in cloud computing.
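
To make the learning loop concrete, here is a minimal, self-contained sketch of a policy-gradient (REINFORCE) scheduler in a toy environment with a handful of machines; the environment, reward, and network sizes are assumptions for illustration and do not reproduce any specific algorithm from the survey.

```python
import torch
import torch.nn as nn

# Toy cloud environment (hypothetical): N_MACHINES machines, each job has a CPU demand.
# The agent observes current machine loads plus the job demand, picks a machine,
# and is rewarded for keeping the load spread across machines balanced.
N_MACHINES = 4

class SchedulerPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_MACHINES + 1, 32), nn.ReLU(),
                                 nn.Linear(32, N_MACHINES))
    def forward(self, state):
        return torch.distributions.Categorical(logits=self.net(state))

policy = SchedulerPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

for episode in range(200):                       # REINFORCE over short episodes
    loads = torch.zeros(N_MACHINES)
    log_probs, rewards = [], []
    for _ in range(20):                          # 20 jobs per episode
        demand = torch.rand(1) * 0.5
        state = torch.cat([loads, demand])       # observation: loads + job demand
        dist = policy(state)
        action = dist.sample()                   # machine chosen for this job
        loads[action] += demand.item()
        log_probs.append(dist.log_prob(action))
        rewards.append(-loads.std().item())      # reward: penalize load imbalance
    returns = torch.tensor(rewards).flip(0).cumsum(0).flip(0)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```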


Cloud Platforms for Developing Generative AI Solutions: A Scoping Review of Tools and Services

arXiv.org Artificial Intelligence

Generative AI is transforming enterprise application development by enabling machines to create content, code, and designs. These models, however, demand substantial computational power and data management. Cloud computing addresses these needs by offering infrastructure to train, deploy, and scale generative AI models. This review examines cloud services for generative AI, focusing on key providers like Amazon Web Services (AWS), Microsoft Azure, Google Cloud, IBM Cloud, Oracle Cloud, and Alibaba Cloud. It compares their strengths, weaknesses, and impact on enterprise growth. We explore the role of high-performance computing (HPC), serverless architectures, edge computing, and storage in supporting generative AI. We also highlight the significance of data management, networking, and AI-specific tools in building and deploying these models. Additionally, the review addresses security concerns, including data privacy, compliance, and AI model protection. It assesses the performance and cost efficiency of various cloud providers and presents case studies from healthcare, finance, and entertainment. We conclude by discussing challenges and future directions, such as technical hurdles, vendor lock-in, sustainability, and regulatory issues. Taken together, this work can serve as a guide for practitioners and researchers looking to adopt cloud-based generative AI solutions and to navigate the intricacies of this evolving field.
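
As a provider-agnostic illustration of the serverless pattern mentioned above, the sketch below wraps a stub generative model behind a stateless handler; the function names and event format are hypothetical, and no real provider SDK is used.

```python
import json
from functools import lru_cache

@lru_cache(maxsize=1)
def load_model():
    """Load the generative model once per warm container (hypothetical stub)."""
    # In practice this would pull weights from object storage or a model registry.
    return lambda prompt: f"[generated text for: {prompt}]"

def handler(event: dict, context=None) -> dict:
    """Stateless entry point in the style of a serverless function."""
    body = json.loads(event.get("body", "{}"))
    prompt = body.get("prompt", "")
    model = load_model()   # reused across invocations while the container stays warm
    return {"statusCode": 200, "body": json.dumps({"completion": model(prompt)})}

# Local usage example.
print(handler({"body": json.dumps({"prompt": "Summarize cloud GPUs"})}))
```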


Efficient Distributed Training through Gradient Compression with Sparsification and Quantization Techniques

arXiv.org Artificial Intelligence

This study investigates the impact of gradient compression on distributed training performance, focusing on sparsification and quantization techniques, including top-k, DGC, and QSGD. In baseline experiments, random-k compression results in severe performance degradation, highlighting its inefficacy. In contrast, using top-k and DGC at 50 times compression yields performance improvements, reducing perplexity by up to 0.06 compared to baseline. Experiments across 1, 2, and 4 workers demonstrate that conservative sparsification can have a regularizing effect, especially for smaller models, while compression ratios above 5000 times impair performance, particularly for DGC. Communication times are reduced across all compression methods, with top-k and DGC decreasing communication to negligible levels at high compression ratios. However, increased computation times offset this efficiency for top-k due to sorting demands, making it less scalable than DGC or QSGD. In convergence tests, sparsification techniques show accelerated convergence, requiring fewer epochs than the baseline, which has implications for computational savings. Although precision trade-offs emerge, floating point errors are mitigated by compression. This study's findings underscore the need to tune hyperparameters specifically for each compression technique to achieve optimal model performance, especially in distributed training systems.
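
For concreteness, the following is a minimal sketch of top-k gradient sparsification with a locally retained residual (in the spirit of DGC-style error accumulation); it omits momentum correction and the other details of the actual DGC and QSGD implementations.

```python
import torch

def topk_sparsify(grad: torch.Tensor, ratio: float):
    """Keep only the largest-magnitude entries of a gradient tensor.

    ratio=0.02 corresponds to 50x compression; returns the sparse gradient to be
    communicated and the residual that is kept locally and added to the next
    step's gradient (error feedback).
    """
    flat = grad.flatten()
    k = max(1, int(flat.numel() * ratio))
    _, idx = torch.topk(flat.abs(), k)
    mask = torch.zeros_like(flat, dtype=torch.bool)
    mask[idx] = True
    sparse = torch.where(mask, flat, torch.zeros_like(flat)).view_as(grad)
    residual = torch.where(mask, torch.zeros_like(flat), flat).view_as(grad)
    return sparse, residual

# Usage: compress a gradient 50x, carrying the residual to the next iteration.
g = torch.randn(1024)
sparse_g, residual = topk_sparsify(g, ratio=0.02)
print(int((sparse_g != 0).sum()))  # about 20 nonzero entries transmitted
```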


Digital Twin in Industries: A Comprehensive Survey

arXiv.org Artificial Intelligence

Industrial networks are undergoing rapid transformation driven by the convergence of emerging technologies that are revolutionizing conventional workflows, enhancing operational efficiency, and fundamentally redefining the industrial landscape across diverse sectors. Amidst this revolution, Digital Twin (DT) emerges as a transformative innovation that seamlessly integrates real-world systems with their virtual counterparts, bridging the physical and digital realms. In this article, we present a comprehensive survey of the emerging DT-enabled services and applications across industries, moving from an overview of DT fundamentals and its components to a discussion of key enabling technologies for DT. Unlike existing literature, we investigate and analyze the capabilities of DT across a wide range of industrial services, including data sharing, data offloading, integrated sensing and communication, content caching, resource allocation, wireless networking, and metaverse. In particular, we present an in-depth technical discussion of the roles of DT in industrial applications across various domains, including manufacturing, healthcare, transportation, energy, agriculture, space, oil and gas, as well as robotics. Throughout the technical analysis, we delve into real-time data communications between physical and virtual platforms to enable industrial DT networking. Subsequently, we extensively explore and analyze a wide range of major privacy and security issues in DT-based industry. Taxonomy tables and the key research findings from the survey are also given, emphasizing important insights into the significance of DT in industries. Finally, we point out future research directions to spur further research in this promising area.


AI-Native Multi-Access Future Networks -- The REASON Architecture

arXiv.org Artificial Intelligence

The development of the sixth generation of communication networks (6G) has been gaining momentum over the past years, with a target of being introduced by 2030. Several initiatives worldwide are developing innovative solutions and setting the direction for the key features of these networks. Some common emerging themes are the tight integration of AI, the convergence of multiple access technologies and sustainable operation, aiming to meet stringent performance and societal requirements. To that end, we are introducing REASON - Realising Enabling Architectures and Solutions for Open Networks. The REASON project aims to address technical challenges in future network deployments, such as E2E service orchestration, sustainability, security and trust management, and policy management, utilising AI-native principles, considering multiple access technologies and cloud-native solutions. This paper presents REASON's architecture and the identified requirements for future networks. The architecture is meticulously designed for modularity, interoperability, scalability, simplified troubleshooting, flexibility, and enhanced security, taking into consideration current and future standardisation efforts, and the ease of implementation and training. It is structured into four horizontal layers: Physical Infrastructure, Network Service, Knowledge, and End-User Application, complemented by two vertical layers: Management and Orchestration, and E2E Security. This layered approach ensures a robust, adaptable framework to support the diverse and evolving requirements of 6G networks, fostering innovation and facilitating seamless integration of advanced technologies.
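
As a small illustration, the layered structure described above can be encoded as a simple configuration; the layer names come from the text, while the code itself is only an assumed, illustrative representation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Layer:
    name: str
    orientation: str  # "horizontal" or "vertical"

# Layer names taken from the REASON description above; the split into
# horizontal and vertical (cross-cutting) layers mirrors the text.
REASON_LAYERS = [
    Layer("Physical Infrastructure", "horizontal"),
    Layer("Network Service", "horizontal"),
    Layer("Knowledge", "horizontal"),
    Layer("End-User Application", "horizontal"),
    Layer("Management and Orchestration", "vertical"),
    Layer("E2E Security", "vertical"),
]

# Example: list the cross-cutting (vertical) layers.
print([layer.name for layer in REASON_LAYERS if layer.orientation == "vertical"])
```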


Transforming the Hybrid Cloud for Emerging AI Workloads

arXiv.org Artificial Intelligence

This white paper, developed through close collaboration between IBM Research and UIUC researchers within the IIDAI Institute, envisions transforming hybrid cloud systems to meet the growing complexity of AI workloads through innovative, full-stack co-design approaches, emphasizing usability, manageability, affordability, adaptability, efficiency, and scalability. By integrating cutting-edge technologies such as generative and agentic AI, cross-layer automation and optimization, unified control plane, and composable and adaptive system architecture, the proposed framework addresses critical challenges in energy efficiency, performance, and cost-effectiveness. Incorporating quantum computing as it matures will enable quantum-accelerated simulations for materials science, climate modeling, and other high-impact domains. Collaborative efforts between academia and industry are central to this vision, driving advancements in foundation models for material design and climate solutions, scalable multimodal data processing, and enhanced physics-based AI emulators for applications like weather forecasting and carbon sequestration. Research priorities include advancing AI agentic systems, LLM as an Abstraction (LLMaaA), AI model optimization and unified abstractions across heterogeneous infrastructure, end-to-end edge-cloud transformation, efficient programming models, middleware and platforms, secure infrastructure, application-adaptive cloud systems, and new quantum-classical collaborative workflows. These ideas and solutions encompass both theoretical and practical research questions, requiring coordinated input and support from the research community. This joint initiative aims to establish hybrid clouds as secure, efficient, and sustainable platforms, fostering breakthroughs in AI-driven applications and scientific discovery across academia, industry, and society.


A Survey on Data Markets

arXiv.org Artificial Intelligence

Data is the new oil of the 21st century. The growing trend of trading data for greater welfare has led to the emergence of data markets. A data market is any mechanism whereby the exchange of data products, including datasets and data derivatives, takes place as a result of data buyers and data sellers being in contact with one another, either directly or through mediating agents. It serves as a coordinating mechanism through which several functions, most importantly the pricing and distribution of data, interact so that the value of data is fully exploited and enhanced. In this article, we present a comprehensive survey of this important and emerging direction from the aspects of data search, data productization, data transaction, data pricing, and revenue allocation, as well as privacy, security, and trust issues. We also investigate the government policies and industry status of data markets across different countries and domains. Finally, we identify the unresolved challenges and discuss possible future directions for the development of data markets.
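
As a toy illustration of the revenue-allocation function mentioned above, the sketch below splits a data product's revenue among sellers in proportion to an assumed contribution score; proportional sharing is just one simple policy and is not claimed to be the survey's approach.

```python
def allocate_revenue(total_revenue: float, contributions: dict) -> dict:
    """Split revenue among data sellers in proportion to a contribution score.

    `contributions` maps seller id -> nonnegative score (e.g. data volume or a
    measured utility gain); proportional sharing is only one of many policies
    a data market might adopt.
    """
    total = sum(contributions.values())
    if total == 0:
        return {seller: 0.0 for seller in contributions}
    return {seller: total_revenue * score / total
            for seller, score in contributions.items()}

# Usage: three sellers contribute datasets of different assessed value.
print(allocate_revenue(1000.0, {"seller_a": 5.0, "seller_b": 3.0, "seller_c": 2.0}))
# {'seller_a': 500.0, 'seller_b': 300.0, 'seller_c': 200.0}
```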


A generic approach for reactive stateful mitigation of application failures in distributed robotics systems deployed with Kubernetes

arXiv.org Artificial Intelligence

Offloading computationally expensive algorithms to the edge or even the cloud offers an attractive option for tackling the limitations of on-board computational and energy resources of robotic systems. In cloud-native applications deployed with the container management system Kubernetes (K8s), one key problem is ensuring resilience against various types of failures. However, complex robotic systems interacting with the physical world pose a very specific set of challenges and requirements that are not yet covered by failure mitigation approaches from the cloud-native domain. In this paper, we therefore propose a novel approach to robotic system monitoring and stateful, reactive failure mitigation for distributed robotic systems deployed using Kubernetes (K8s) and the Robot Operating System (ROS2). By employing the generic substrate of Behaviour Trees, our approach can be applied to any robotic workload and supports arbitrarily complex monitoring and failure mitigation strategies. We demonstrate the effectiveness and application-agnosticism of our approach on two example applications, namely Autonomous Mobile Robot (AMR) navigation and robotic manipulation in a simulated environment.
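
To illustrate the monitoring-and-mitigation pattern, the following is a dependency-free sketch of a tiny Behaviour Tree in which a condition node checks a health signal and a fallback node triggers a restart action when it fails; the health signal and restart logic are hypothetical stand-ins for the Kubernetes API calls and ROS2 introspection a real deployment would use.

```python
class Node:
    def tick(self) -> str:          # returns "SUCCESS" or "FAILURE"
        raise NotImplementedError

class Condition(Node):
    def __init__(self, check):
        self.check = check
    def tick(self):
        return "SUCCESS" if self.check() else "FAILURE"

class Action(Node):
    def __init__(self, act):
        self.act = act
    def tick(self):
        self.act()
        return "SUCCESS"

class Fallback(Node):
    """Try children left to right; succeed on the first child that succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == "SUCCESS":
                return "SUCCESS"
        return "FAILURE"

# Hypothetical health signal and mitigation; a real system would query pod
# status or ROS2 liveliness and restart the workload via the Kubernetes API.
nav_healthy = {"ok": False}
def restart_navigation():
    print("restarting navigation pods (stateful mitigation would restore context here)")
    nav_healthy["ok"] = True

monitor = Fallback(Condition(lambda: nav_healthy["ok"]), Action(restart_navigation))
print(monitor.tick())   # triggers the restart once, then reports SUCCESS
```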


AI-Driven Innovations in Modern Cloud Computing

arXiv.org Artificial Intelligence

The world has witnessed rapid technological transformation over the past couple of decades, and with the advent of cloud computing the landscape has evolved exponentially, enabling efficient and scalable application development. Over the past couple of years, the digital ecosystem has brought numerous innovations through the integration of Artificial Intelligence, commonly known as AI. This paper explores how AI and cloud computing intersect to deliver transformative capabilities for modernizing applications by providing services and infrastructure. By harnessing the combined potential of AI and cloud technologies, technology providers can now exploit intelligent resource management, predictive analytics, and automated deployment and scaling with enhanced security, offering innovative solutions to their customers. Furthermore, by leveraging such cloud and AI technologies, businesses can reap rich rewards in the form of reduced operational costs and improved service delivery. This paper further addresses associated challenges, such as data privacy concerns, and how they can be mitigated with robust AI governance frameworks.
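
As a toy example of the predictive scaling idea mentioned above, the sketch below forecasts load with a moving average and sizes the replica count accordingly; the class name, per-replica capacity, and bounds are illustrative assumptions rather than any particular provider's autoscaler.

```python
import math
from collections import deque

class PredictiveAutoscaler:
    """Toy predictive scaler: forecast next-interval load as a moving average
    and size replicas so each handles at most `capacity` requests."""
    def __init__(self, capacity: float = 100.0, window: int = 5,
                 min_replicas: int = 1, max_replicas: int = 20):
        self.capacity = capacity
        self.history = deque(maxlen=window)
        self.min_replicas = min_replicas
        self.max_replicas = max_replicas

    def recommend(self, observed_load: float) -> int:
        self.history.append(observed_load)
        forecast = sum(self.history) / len(self.history)   # moving-average forecast
        replicas = math.ceil(forecast / self.capacity)
        return max(self.min_replicas, min(self.max_replicas, replicas))

# Usage: as load ramps up, the recommendation follows the forecast.
scaler = PredictiveAutoscaler()
for load in [80, 150, 320, 500, 640]:
    print(load, "->", scaler.recommend(load), "replicas")
```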