Smaller, Smarter, Closer: The Edge of Collaborative Generative AI
Roberto Morabito, SiYoung Jang
arXiv.org Artificial Intelligence
The rapid adoption of generative AI (GenAI), particularly Large Language Models (LLMs), has exposed critical limitations of cloud-centric deployments, including latency, cost, and privacy concerns. Meanwhile, Small Language Models (SLMs) are emerging as viable alternatives for resource-constrained edge environments, though they often lack the capabilities of their larger counterparts. This article explores the potential of collaborative inference systems that leverage both edge and cloud resources to address these challenges. By presenting distinct cooperation strategies alongside practical design principles and experimental insights, we offer actionable guidance for deploying GenAI across the computing continuum. Ultimately, this work underscores the great potential of edge-first approaches in realizing the promise of GenAI in diverse, real-world applications.

It is no longer necessary to elaborate extensively on the transformative impact of generative AI (GenAI) models, particularly Large Language Models (LLMs), across various sectors of society. From healthcare to education, and from entertainment to software development and IoT [1], it is evident that nearly every application domain is poised to be influenced by these technologies, or already is. LLMs such as GPT-4, powered by transformer architectures with billions of parameters, excel in diverse NLP tasks (e.g., summarization, translation, query answering) and high-level reasoning.
May-30-2025