Distributed Architectures
Roadmap for Edge AI: A Dagstuhl Perspective
Ding, Aaron Yi, Peltonen, Ella, Meuser, Tobias, Aral, Atakan, Becker, Christian, Dustdar, Schahram, Hiessl, Thomas, Kranzlmuller, Dieter, Liyanage, Madhusanka, Magshudi, Setareh, Mohan, Nitinder, Ott, Joerg, Rellermeyer, Jan S., Schulte, Stefan, Schulzrinne, Henning, Solmaz, Gurkan, Tarkoma, Sasu, Varghese, Blesson, Wolf, Lars
Based on the collective input of Dagstuhl Seminar 21342, this paper presents a comprehensive discussion on AI methods and capabilities in the context of edge computing, referred to as Edge AI. In a nutshell, we envision Edge AI to provide adaptation for data-driven applications, enhance network and radio access, and allow the creation, optimization, and deployment of distributed AI/ML pipelines with given quality of experience, trust, security, and privacy targets. The Edge AI community investigates novel ML methods for the edge computing environment, spanning multiple sub-fields of computer science, engineering, and ICT. The goal is to share an envisioned roadmap that can bring together key actors and enablers to further advance the domain of Edge AI.
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
- Europe > Netherlands > South Holland > Delft (0.04)
- Europe > Germany > Hesse > Darmstadt Region > Darmstadt (0.04)
- (4 more...)
- Information Technology > Security & Privacy (1.00)
- Energy (1.00)
- Information Technology > Communications > Networks (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Systems & Languages > Distributed Architectures (0.34)
Distributed Artificial Intelligence
Let's start from the broader classification. Distributed Artificial Intelligence (DAI) is a class of technologies and methods, spanning from swarm intelligence to multi-agent systems, concerned with developing distributed solutions to a given problem. It is mainly used for learning, reasoning, and planning, and it is one of the subsets of AI where simulation matters far more than point prediction. In this class of systems, autonomous learning agents (distributed at large scale and independent of one another) reach conclusions or a semi-equilibrium through interaction and communication, which may be asynchronous. A major benefit of these systems over neural networks is that they do not require the same amount of data to work, though that is far from saying they are simple systems.
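The idea of independent agents reaching a shared conclusion purely through pairwise communication, with no central coordinator, can be sketched with a toy gossip-averaging loop. This is a minimal illustration under assumed data (the sensor readings and round count are invented for the example), not any specific DAI framework:

```python
import random

def gossip_consensus(values, rounds=200, seed=0):
    """Pairwise gossip averaging: at each step two random agents
    exchange state and move to their mutual average, so the whole
    population drifts toward a shared estimate without any
    central coordinator."""
    rng = random.Random(seed)
    state = list(values)
    for _ in range(rounds):
        i, j = rng.sample(range(len(state)), 2)
        avg = (state[i] + state[j]) / 2
        state[i] = state[j] = avg
    return state

# Four agents start with different local readings...
readings = [10.0, 20.0, 30.0, 40.0]
estimates = gossip_consensus(readings)
# ...and all end up close to the global mean of 25.0
```

Because each exchange preserves the sum of the states, the agents converge on the global mean even though no agent ever sees more than one peer's value at a time.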
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Systems & Languages > Distributed Architectures (0.92)
Pervasive AI for IoT Applications: Resource-efficient Distributed Artificial Intelligence
Baccour, Emna, Mhaisen, Naram, Abdellatif, Alaa Awad, Erbad, Aiman, Mohamed, Amr, Hamdi, Mounir, Guizani, Mohsen
Artificial intelligence (AI) has witnessed a substantial breakthrough in a variety of Internet of Things (IoT) applications and services, spanning from recommendation systems to robotics control and military surveillance. This is driven by easier access to sensory data and the enormous scale of pervasive/ubiquitous devices that generate zettabytes (ZB) of real-time data streams. Designing accurate models using such data streams, to predict future insights and revolutionize the decision-making process, inaugurates pervasive systems as a worthy paradigm for a better quality of life. The confluence of pervasive computing and artificial intelligence, Pervasive AI, expanded the role of ubiquitous IoT systems from mainly data collection to executing distributed computations, a promising alternative to centralized learning that presents various challenges. In this context, careful cooperation and resource scheduling should be envisaged among IoT devices (e.g., smartphones, smart vehicles) and infrastructure (e.g., edge nodes and base stations) to avoid communication and computation overheads and ensure maximum performance. In this paper, we conduct a comprehensive survey of the recent techniques developed to overcome these resource challenges in pervasive AI systems. Specifically, we first present an overview of pervasive computing, its architecture, and its intersection with artificial intelligence. We then review the background, applications, and performance metrics of AI, particularly Deep Learning (DL) and online learning, running in a ubiquitous system. Next, we provide a deep literature review of communication-efficient techniques, from both algorithmic and system perspectives, for distributed inference, training, and online learning tasks across the combination of IoT devices, edge devices, and cloud servers. Finally, we discuss our future vision and research challenges.
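One core scheduling decision the survey touches on, where to split a DL pipeline between a device and an edge server, can be sketched as a simple latency minimization. All numbers below (layer timings, activation sizes, uplink rate) are hypothetical profiling figures invented for illustration:

```python
# Hypothetical per-layer profile: (name, device_ms, edge_ms, output_kb)
LAYERS = [
    ("conv1", 40.0, 5.0, 600.0),
    ("conv2", 35.0, 4.0, 300.0),
    ("pool",  10.0, 1.5, 80.0),
    ("fc",    25.0, 3.0, 4.0),
]

RAW_INPUT_KB = 1000.0  # assumed size of the raw sensor input

def best_split(layers, uplink_kb_per_ms=10.0):
    """Try every split point k: layers[:k] run on-device, layers[k:]
    at the edge, and the intermediate activation crosses the uplink.
    Returns (split_index, total_latency_ms)."""
    best = None
    for k in range(len(layers) + 1):
        device = sum(l[1] for l in layers[:k])
        edge = sum(l[2] for l in layers[k:])
        # ship either the raw input (k == 0) or the k-th activation
        payload = RAW_INPUT_KB if k == 0 else layers[k - 1][3]
        total = device + edge + payload / uplink_kb_per_ms
        if best is None or total < best[1]:
            best = (k, total)
    return best

split, latency = best_split(LAYERS)
```

With this profile the cheapest plan runs the convolutional layers on-device and ships only the small pooled activation to the edge, illustrating why communication-efficient partitioning depends on both compute and activation sizes.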
- North America > United States > New York (0.27)
- Asia > Middle East > Qatar (0.14)
- Europe > Netherlands (0.14)
- (6 more...)
- Research Report (1.00)
- Overview (1.00)
- Telecommunications (1.00)
- Information Technology > Smart Houses & Appliances (1.00)
- Information Technology > Services (1.00)
- (8 more...)
Analyzing Power Grid, ICT, and Market Without Domain Knowledge Using Distributed Artificial Intelligence
Veith, Eric MSP, Balduin, Stephan, Wenninghoff, Nils, Tröschel, Martin, Fischer, Lars, Nieße, Astrid, Wolgast, Thomas, Sethmann, Richard, Fraune, Bastian, Woltjen, Torben
Modern cyber-physical systems (CPS), such as our energy infrastructure, are becoming increasingly complex: An ever-higher share of Artificial Intelligence (AI)-based technologies use the Information and Communication Technology (ICT) facet of energy systems for operation optimization, cost efficiency, and to reach CO2 goals worldwide. At the same time, markets with increased flexibility and ever shorter trade horizons enable the multi-stakeholder situation that is emerging in this setting. These systems still form critical infrastructures that need to perform with the highest reliability. However, today's CPS are becoming too complex to be analyzed with the traditional monolithic approach, in which each domain (e.g., the power grid, ICT, and the energy market) is considered a separate entity while dependencies and side effects are ignored. To achieve an overall analysis, we introduce a concept for applying distributed artificial intelligence as a self-adaptive analysis tool that is able to analyze the dependencies between domains in CPS by attacking them. It eschews pre-configured domain knowledge, instead exploring the CPS domains for emergent risk situations and exploitable loopholes in codices, with a focus on rational market actors that exploit the system while still following the market rules.
- North America > United States > Arizona > Maricopa County > Phoenix (0.04)
- Europe > Ukraine (0.04)
- Europe > Germany > Berlin (0.04)
- (6 more...)
- Information Technology > Security & Privacy (1.00)
- Energy > Power Industry (1.00)
- Energy > Renewable > Wind (0.46)
- Energy > Renewable > Solar (0.46)
- Information Technology > Communications > Networks (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Systems & Languages > Distributed Architectures (0.84)
New Research Shows How AI Can Act as Mediators
According to VentureBeat, AI researchers at Uber have recently posted a paper to Arxiv outlining a new platform intended to assist in the creation of distributed AI models. The platform is called Fiber, and it can be used to drive both reinforcement learning tasks and population-based learning. Fiber is designed to make large-scale parallel computation more accessible to non-experts, letting them take advantage of the power of distributed AI algorithms and models. Fiber has recently been made open source on GitHub. It supports Python 3.6 and above on Linux systems, with Kubernetes running in a cloud environment. According to the team of researchers, the platform is capable of easily scaling up to hundreds or thousands of individual machines.
- Information Technology > Artificial Intelligence > Systems & Languages > Distributed Architectures (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
Uber details Fiber, a framework for distributed AI model training
A preprint paper coauthored by Uber AI scientists and Jeff Clune, a research team leader at San Francisco startup OpenAI, describes Fiber, an AI development and distributed training platform for methods including reinforcement learning (which spurs AI agents to complete goals via rewards) and population-based learning. The team says that Fiber expands the accessibility of large-scale parallel computation without the need for specialized hardware or equipment, enabling non-experts to reap the benefits of genetic algorithms, in which populations of agents evolve rather than individual members. Fiber -- which was developed to power large-scale parallel scientific computation projects like POET -- is available as open source as of this week, on GitHub. It supports Linux systems running Python 3.6 and up and Kubernetes running on public cloud environments like Google Cloud, and the research team says that it can scale to hundreds or even thousands of machines. As the researchers point out, increasing computation underlies many recent advances in machine learning, with more and more algorithms relying on distributed training for processing an enormous amount of data.
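The population-based learning that Fiber targets can be sketched with the standard library's multiprocessing pool, which Fiber's API is described as mirroring. This is a toy genetic loop under invented parameters (the objective, population size, and mutation scale are all illustrative), not Uber's actual training code:

```python
import random
from multiprocessing import Pool

def fitness(x):
    # Toy objective with its peak at x = 3; a real workload would be
    # an expensive simulation or policy evaluation.
    return -(x - 3.0) ** 2

def evolve(generations=30, pop_size=32, seed=0):
    """Evolve a population of scalar genomes: evaluate in parallel,
    keep the fittest quarter, and refill via mutated copies."""
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    with Pool(2) as pool:
        for _ in range(generations):
            scores = pool.map(fitness, pop)  # parallel evaluation
            ranked = [x for _, x in sorted(zip(scores, pop), reverse=True)]
            parents = ranked[: pop_size // 4]
            pop = [rng.choice(parents) + rng.gauss(0, 0.5)
                   for _ in range(pop_size)]
    return max(pop, key=fitness)

if __name__ == "__main__":
    print(evolve())  # converges near the optimum at 3.0
```

The point of a framework like Fiber is that the `pool.map` call, trivial here, can fan out across hundreds of machines instead of a few local processes.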
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (0.41)
- Information Technology > Artificial Intelligence > Systems & Languages > Distributed Architectures (0.40)
Climate, Security and Migration: Using Advanced Distributed AI Models
The Paris Agreement is a milestone, and all parties should take concrete actions to fulfill their commitments. Institutions and processes must be adapted for tomorrow's changes and equipped to take up all the challenges of climate change. There is a shared responsibility to prepare for climate impacts on security and migration. By 2030, more than 60% of the world's poor will live in fragile and crisis contexts. Assessing and anticipating climate risks in the most fragile situations should be a priority.
XAIN Puts AI Privacy First, at No Cost to Efficiency, with its Distributed AI Solution - insideBIGDATA
XAIN, the AI startup that specializes in privacy-oriented Federated Machine Learning (FedML), is developing an infrastructure to train artificial intelligence applications through FedML technology, a mechanism that emphasizes data privacy. XAIN's distributed approach to machine learning, which intends to comply with the European Commission's General Data Protection Regulation (GDPR), also provides greater efficiency in the way models are trained, marking a major breakthrough in a field otherwise burdened by costly and onerous processes. When you download facial recognition software onto your phone, your data is usually stored on the central database of the app providing the service. FaceApp, for instance, infuriated the public recently for storing data centrally, though they're far from the first AI-based app to lack privacy protection measures. Data aggregation is essential for AI technology to work -- the question is how to preserve privacy throughout the process.
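The federated learning mechanism described here can be sketched as a minimal federated-averaging loop: each client fits a tiny model on its private data, and only the model weights, never the raw data, travel to the server for averaging. This is an illustrative toy (the linear model, learning rate, and client data are all invented), not XAIN's actual API:

```python
def local_step(w, data, lr=0.05):
    """One gradient step of least-squares y = w * x on a client's
    private (x, y) pairs. The raw data never leaves this function."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(w_global, clients, lr=0.05):
    """Each client trains locally; the server only sees and averages
    the resulting weights (the FedAvg idea)."""
    local_weights = [local_step(w_global, data, lr) for data in clients]
    return sum(local_weights) / len(local_weights)

# Two clients whose private data both follow y = 2x
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0), (4.0, 8.0)],
]
w = 0.0
for _ in range(100):
    w = federated_round(w, clients)
# w converges toward 2.0 without any raw (x, y) pair leaving a client
```

The privacy argument is visible in the data flow: the server's only inputs are per-client weight values, which is what lets this style of training sidestep the central data aggregation the article criticizes.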
- Information Technology > Security & Privacy (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (0.66)
- Information Technology > Artificial Intelligence > Systems & Languages > Distributed Architectures (0.40)
Top IT predictions in APAC in 2019
The growing use of AI will increase data usage exponentially. As part of Singapore's smart nation initiative, the government has planned to invest up to S$150m from the National Research Foundation in AI over five years through the AI Singapore programme. While first-generation AI architectures have historically been centralised, Equinix predicts that enterprises will enter the realm of distributed AI architectures, where AI model building and model inferencing will take place at the edge, physically closer to where the data originates. To access more external data sources for accurate predictions, enterprises will turn to secure data transaction marketplaces. They will also strive to leverage AI innovation in multiple public clouds without getting locked into a single cloud, further decentralising AI architectures.