Wireless Networks


How AI Could Change Healthcare - 7SIGNAL

#artificialintelligence

The advent of commonplace artificial intelligence in healthcare is only a matter of time. Over half of health professionals believe the use of AI will be widespread within the next four years, while the market for healthcare AI is expected to reach $34 billion by 2025, a huge leap from $1 billion in 2017. Progress is inevitable, but both patients and physicians have their concerns. An Intel survey found that 30 percent of clinicians don't trust AI, and neither do 36 percent of patients. Conversely, other data shows patient trust in AI is on the rise, even when it addresses the most serious conditions.


A Joint Learning and Communications Framework for Federated Learning over Wireless Networks

arXiv.org Machine Learning

In this paper, the problem of training federated learning (FL) algorithms over a realistic wireless network is studied. In particular, in the considered model, wireless users execute an FL algorithm while training their local FL models using their own data, and transmit the trained local FL models to a base station (BS) that generates a global FL model and sends it back to the users. Since all training parameters are transmitted over wireless links, the quality of the training is affected by wireless factors such as packet errors and the availability of wireless resources. Meanwhile, due to the limited wireless bandwidth, the BS must select an appropriate subset of users to execute the FL algorithm so as to build an accurate global FL model. This joint learning, wireless resource allocation, and user selection problem is formulated as an optimization problem whose goal is to minimize an FL loss function that captures the performance of the FL algorithm. To address this problem, a closed-form expression for the expected convergence rate of the FL algorithm is first derived to quantify the impact of wireless factors on FL. Then, based on this expected convergence rate, the optimal transmit power for each user is derived under a given user selection and uplink resource block (RB) allocation scheme. Finally, the user selection and uplink RB allocation are optimized so as to minimize the FL loss function. Simulation results show that the proposed joint federated learning and communication framework can reduce the FL loss function value by up to 10% and 16%, respectively, compared to: 1) an optimal user selection algorithm with random resource allocation, and 2) a standard FL algorithm with random user selection and resource allocation.
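
To make the setup concrete, here is a minimal sketch of the federated averaging loop described above, with packet errors and bandwidth-limited user selection folded in. The error model, SNR values, and learning task are illustrative assumptions, not the paper's formulation.

```python
# Minimal sketch of federated averaging (FedAvg) over an unreliable uplink.
# All names, the packet-error model, and the channel-quality user selection
# are illustrative assumptions; the paper optimizes these choices jointly.
import numpy as np

rng = np.random.default_rng(0)
NUM_USERS, DIM, ROUNDS, NUM_RBS = 20, 10, 50, 5

# Each user holds private data: a simple linear-regression task.
X = [rng.normal(size=(100, DIM)) for _ in range(NUM_USERS)]
w_true = rng.normal(size=DIM)
Y = [x @ w_true + 0.1 * rng.normal(size=100) for x in X]

def local_update(w, x, y, lr=0.01, steps=5):
    """A few local SGD steps on the user's own data."""
    for _ in range(steps):
        grad = x.T @ (x @ w - y) / len(y)
        w = w - lr * grad
    return w

def packet_error(snr_db):
    """Toy packet-error probability that decays with uplink SNR (assumed model)."""
    return float(np.exp(-snr_db / 10.0))

global_w = np.zeros(DIM)
snr = rng.uniform(5, 30, size=NUM_USERS)  # stand-in for power/RB allocation

for t in range(ROUNDS):
    # Limited bandwidth: only NUM_RBS users can transmit per round.
    chosen = np.argsort(snr)[-NUM_RBS:]   # naive choice: best channels first
    updates = []
    for k in chosen:
        w_k = local_update(global_w.copy(), X[k], Y[k])
        if rng.random() > packet_error(snr[k]):  # update survives the uplink
            updates.append(w_k)
    if updates:  # the BS aggregates whatever arrived intact
        global_w = np.mean(updates, axis=0)

print("final loss:", np.mean([np.mean((x @ global_w - y) ** 2) for x, y in zip(X, Y)]))
```

Dropping updates lost to packet errors, as this sketch does, is exactly the wireless effect the paper's convergence analysis quantifies: fewer surviving updates per round slows and biases the global model.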


AI IN TELECOMMUNICATIONS: Why carriers could lose out if they don't adopt AI fast -- and where they can make the biggest gains

#artificialintelligence

Making matters worse, improvements in infrastructure and technology have made telecoms largely comparable in terms of coverage, connection speeds, and service pricing, meaning companies must transform their businesses if they hope to compete. For many global telecoms, shoring up market share under today's pressures while also future-proofing operations means having to invest in AI. The telecom industry is expected to invest $36.7 billion annually in AI software, hardware, and services by 2025, according to Tractica. Through its ability to parse large data sets in a contextual manner, provide requested information or analysis, and trigger actions, AI can help telecoms cut costs and streamline their operations through digitization. In practice, this means leveraging the vast and growing gold mine of customer data that passes through wireless networks: the amount of data moving through AT&T's wireless network, for example, has increased 470,000% since 2007.


Federated Learning for Wireless Communications: Motivation, Opportunities and Challenges

arXiv.org Machine Learning

There is a growing interest in the wireless communications community to complement traditional model-based design approaches with data-driven machine learning (ML)-based solutions. While conventional ML approaches rely on the assumption that both the data and the processing reside at a central entity, this is not always feasible in wireless communications applications, because of the inaccessibility of private data and the large communication overhead required to transmit raw data to central ML processors. As a result, decentralized ML approaches that keep the data where it is generated are much more appealing. Owing to its privacy-preserving nature, federated learning is particularly relevant for many wireless applications, especially in the context of fifth generation (5G) networks. In this article, we provide an accessible introduction to the general idea of federated learning, discuss several possible applications in 5G networks, and describe key technical challenges and open problems for future research on federated learning in the context of wireless communications.
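
The communication-overhead argument is easy to quantify with a back-of-the-envelope comparison. All figures below are illustrative assumptions; the point is only that uploading raw data and uploading model updates scale very differently.

```python
# Back-of-the-envelope uplink traffic: centralized ML (ship raw data)
# versus federated learning (ship model updates). Every number here is
# an illustrative assumption, not a measured figure.

SAMPLES_PER_DEVICE = 10_000       # raw training examples held locally
BYTES_PER_SAMPLE   = 100_000      # e.g., one image or CSI snapshot ~100 KB
MODEL_PARAMS       = 100_000      # a small model
BYTES_PER_PARAM    = 4            # float32 weights
ROUNDS             = 200          # FL communication rounds

raw_upload = SAMPLES_PER_DEVICE * BYTES_PER_SAMPLE     # centralized
fl_upload  = ROUNDS * MODEL_PARAMS * BYTES_PER_PARAM   # federated

print(f"centralized: {raw_upload / 1e6:.0f} MB of private raw data per device")
print(f"federated:   {fl_upload / 1e6:.0f} MB of model updates; raw data never leaves")
```

Note the trade-off can flip for small datasets and large models, which is one reason update compression and user selection matter in wireless FL.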


Data Analysis of Wireless Networks Using Classification Techniques

arXiv.org Machine Learning

In the last decade, there has been great technological advancement in mobile network infrastructure, along with notable growth in the use of wireless local area networks and satellite services. The high utilization rate of mobile devices for various purposes makes clear the need to monitor wireless networks to ensure the integrity and confidentiality of the information transmitted. It is therefore necessary to identify normal and abnormal network traffic quickly and efficiently, so that administrators can take action. This work analyzes classification techniques applied to wireless network data, using classes of anomalies pre-established according to defined criteria of the MAC layer. For data analysis, the WEKA (Waikato Environment for Knowledge Analysis) data mining software is used. The classification algorithms achieve a viable success rate, indicating their suitability for use in intrusion detection systems for wireless networks.
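
As a rough illustration of this workflow, the sketch below trains a classifier on synthetic MAC-layer features, using scikit-learn in place of WEKA; the feature names, data, and model choice are assumptions for illustration only.

```python
# Rough sketch of the classification workflow described above, using
# scikit-learn rather than WEKA. The MAC-layer feature names and the
# synthetic data/labels are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
N = 2000

# Hypothetical per-frame MAC-layer features: retry rate, deauth count,
# beacon-interval jitter, frame size. Labels: 0 = normal, 1 = anomalous.
X = rng.normal(size=(N, 4))
y = (X[:, 1] + 0.5 * X[:, 2] + 0.3 * rng.normal(size=N) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te),
                            target_names=["normal", "anomalous"]))
```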


Machine Learning for Intelligent Authentication in 5G-and-Beyond Wireless Networks

arXiv.org Machine Learning

Fifth generation (5G) and beyond wireless networks are critical to supporting diverse vertical applications by connecting heterogeneous devices and machines, which directly increases vulnerability to various spoofing attacks. Conventional cryptographic and physical layer authentication techniques face challenges in complex, dynamic wireless environments, including significant security overhead, low reliability, and difficulty in pre-designing authentication models, providing continuous protection, and learning time-varying attributes. In this article, we envision new authentication approaches based on machine learning techniques that opportunistically leverage physical layer attributes, introducing intelligence to authentication for more efficient security provisioning. Machine learning paradigms for intelligent authentication design are presented, covering parametric/non-parametric and supervised/unsupervised/reinforcement learning algorithms. In a nutshell, machine learning-based intelligent authentication approaches utilize specific features in the multi-dimensional domain to achieve cost-effective, reliable, model-free, continuous, and situation-aware device validation under unknown network conditions and unpredictable dynamics.
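
A toy sketch of the idea, assuming a one-class model fit on a legitimate device's physical-layer attributes; the features, their values, and the choice of a one-class SVM are illustrative assumptions rather than the authors' method.

```python
# Toy sketch of ML-based physical-layer authentication: fit a model on
# channel attributes observed from the legitimate device, then flag
# transmissions whose attributes don't fit. The Gaussian features and
# the one-class SVM are illustrative assumptions, not the paper's method.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)

# Enrollment: hypothetical RSSI (dBm) and a channel-response feature
# collected from the genuine device.
legit = rng.normal(loc=[-40.0, 0.8], scale=[2.0, 0.05], size=(500, 2))
auth = OneClassSVM(nu=0.05, gamma="scale").fit(legit)

# Runtime: a genuine frame vs. a spoofed frame from a different location.
genuine = np.array([[-41.0, 0.82]])
spoofed = np.array([[-55.0, 0.40]])
print("genuine accepted:", auth.predict(genuine)[0] == 1)   # +1 = inlier
print("spoof rejected:  ", auth.predict(spoofed)[0] == -1)  # -1 = outlier
```

Because the model learns the device's attribute distribution rather than a pre-designed rule, this style of check is the "model-free, continuous" validation the abstract refers to.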


Reinforcement Learning-Based Trajectory Design for the Aerial Base Stations

arXiv.org Artificial Intelligence

In this paper, the trajectory optimization problem for a multi-aerial base station (ABS) communication network is investigated. The objective is to find trajectories for the ABSs that maximize the sum-rate of the users served by each ABS. To reach this goal, along with the optimal trajectory design, optimal power and sub-channel allocation is also of great importance for supporting users with the highest possible data rates. To solve this complicated problem, we divide it into two sub-problems: the ABS trajectory optimization sub-problem, and the joint power and sub-channel assignment sub-problem. Then, based on the Q-learning method, we develop a distributed algorithm that solves these sub-problems efficiently and does not require a significant amount of information exchange between the ABSs and the core network. Simulation results show that although Q-learning is a model-free reinforcement learning technique, it has a remarkable capability to train the ABSs to optimize their trajectories based on the received reward signals, which carry useful information about the topology of the network.
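
As a minimal illustration of the Q-learning machinery involved, the sketch below trains a single agent to steer toward a high-rate cell on a toy grid; the grid, rewards, and hyperparameters are assumptions, and the paper's multi-ABS, power- and sub-channel-aware setting is far richer.

```python
# Minimal tabular Q-learning sketch: one aerial base station moves on a
# grid toward a hotspot where user sum-rate is highest. Grid size, reward
# values, and hyperparameters are toy assumptions.
import numpy as np

rng = np.random.default_rng(3)
GRID, ACTIONS = 5, 4                     # 5x5 grid; up/down/left/right
HOTSPOT = (4, 4)                         # cell with the highest user sum-rate
Q = np.zeros((GRID * GRID, ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.2

def step(pos, a):
    r, c = divmod(pos, GRID)
    dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][a]
    r, c = min(max(r + dr, 0), GRID - 1), min(max(c + dc, 0), GRID - 1)
    # Reward stands in for the served users' sum-rate at the new position.
    reward = 10.0 if (r, c) == HOTSPOT else -1.0
    return r * GRID + c, reward

for episode in range(500):
    pos = 0                              # start in the opposite corner
    for _ in range(50):
        a = rng.integers(ACTIONS) if rng.random() < eps else int(np.argmax(Q[pos]))
        nxt, rwd = step(pos, a)
        # Standard Q-learning update from the received reward signal.
        Q[pos, a] += alpha * (rwd + gamma * Q[nxt].max() - Q[pos, a])
        pos = nxt
        if (pos // GRID, pos % GRID) == HOTSPOT:
            break

print("greedy first move from corner:",
      ["up", "down", "left", "right"][int(np.argmax(Q[0]))])
```

The agent never sees a network model, only rewards, which is the model-free property the abstract highlights.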


Artificial Neural Networks-Based Machine Learning for Wireless Networks: A Tutorial

arXiv.org Artificial Intelligence

Next-generation wireless networks must support ultra-reliable, low-latency communication and intelligently manage a massive number of Internet of Things (IoT) devices in real time, within a highly dynamic environment. These stringent communication quality-of-service (QoS) requirements, together with the need for mobile edge and core intelligence, can only be met by integrating fundamental notions of artificial intelligence (AI) and machine learning across the wireless infrastructure and end-user devices. In this context, this paper provides a comprehensive tutorial that introduces the main concepts of machine learning, in general, and artificial neural networks (ANNs), in particular, and their potential applications in wireless communications. For this purpose, we present a comprehensive overview of a number of key types of neural networks, including feed-forward, recurrent, spiking, and deep neural networks. For each type of neural network, we present the basic architecture and training procedure, as well as the associated challenges and opportunities. Then, we provide an in-depth overview of the variety of wireless communication problems that can be addressed using ANNs, ranging from communication using unmanned aerial vehicles to virtual reality and edge caching. For each individual application, we present the main motivation for using ANNs along with the associated challenges, while also providing a detailed example for a use case scenario and outlining future work that can be addressed using ANNs. In a nutshell, this article constitutes one of the first holistic tutorials on the development of machine learning techniques tailored to the needs of future wireless networks.
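
For readers who want the feed-forward basics in code, here is a minimal one-hidden-layer network trained by plain backpropagation on a toy task; it is a generic numpy illustration, not drawn from the tutorial itself.

```python
# A minimal feed-forward network (one tanh hidden layer) trained by plain
# backpropagation on a toy regression task. Purely a generic illustration
# of the ANN basics; the data and sizes are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(256, 3))                     # e.g., 3 channel features
y = np.sin(X).sum(axis=1, keepdims=True)          # toy target

W1, b1 = rng.normal(scale=0.5, size=(3, 16)), np.zeros(16)
W2, b2 = rng.normal(scale=0.5, size=(16, 1)), np.zeros(1)
lr = 0.05

for epoch in range(500):
    h = np.tanh(X @ W1 + b1)                      # hidden layer
    pred = h @ W2 + b2                            # linear output
    err = pred - y
    # Backpropagation: chain rule through the two layers.
    dW2 = h.T @ err / len(X); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)              # tanh derivative
    dW1 = X.T @ dh / len(X); db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

print("final MSE:", float((err ** 2).mean()))
```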


When Multiple Agents Learn to Schedule: A Distributed Radio Resource Management Framework

arXiv.org Machine Learning

Interference among concurrent transmissions in a wireless network is a key factor limiting system performance. One way to alleviate this problem is to manage the radio resources so as to maximize either the average or the worst-case performance. However, joint consideration of both metrics is often neglected, as they are competing in nature. In this article, a mechanism for radio resource management using multi-agent deep reinforcement learning (RL) is proposed, which strikes the right trade-off between maximizing the average and the $5^{th}$ percentile user throughput. Each transmitter in the network is equipped with a deep RL agent that receives partial observations from the network (e.g., channel quality, interference level, etc.) and decides whether to be active or inactive at each scheduling interval for the given radio resources, a process referred to as link scheduling. Based on the actions of all agents, the network emits a reward to the agents, indicating how good their joint decisions were. The proposed framework enables the agents to make decisions in a distributed manner, and the reward is designed such that the agents strive to guarantee a minimum performance, leading to a fair resource allocation among all users across the network. Simulation results demonstrate the superiority of our approach compared to decentralized baselines in terms of average and $5^{th}$ percentile user throughput, while achieving performance close to that of a centralized exhaustive search approach. Moreover, the proposed framework is robust to mismatches between training and testing scenarios. In particular, it is shown that an agent trained on a network with low transmitter density maintains its performance and outperforms the baselines when deployed in a network with higher transmitter density.
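
A heavily simplified sketch of the distributed link-scheduling idea, using myopic tabular learners instead of deep RL agents; the interference model, reward shaping, and fairness handling are crude assumptions made only to show the active/inactive decision loop.

```python
# Toy sketch of distributed link scheduling with independent learners:
# each transmitter keeps its own Q-table over a coarse interference
# observation and decides active/inactive each interval. The interference
# and reward models are crude assumptions; the paper uses deep RL agents
# and a reward shaped for 5th-percentile fairness.
import numpy as np

rng = np.random.default_rng(5)
N_AGENTS, INTERVALS = 4, 3000
alpha, eps = 0.1, 0.1                    # myopic (bandit-style) learners
# Observation: interference level (0=low, 1=high). Actions: 0=idle, 1=transmit.
Q = np.zeros((N_AGENTS, 2, 2))
state = np.zeros(N_AGENTS, dtype=int)

for t in range(INTERVALS):
    acts = np.array([
        rng.integers(2) if rng.random() < eps else int(np.argmax(Q[i, state[i]]))
        for i in range(N_AGENTS)
    ])
    n_active = int(acts.sum())
    for i in range(N_AGENTS):
        # Active links share the medium: per-link rate falls as more links
        # transmit simultaneously, minus a fixed power/interference cost,
        # so transmitting into a crowded channel earns a negative reward.
        reward = (1.0 / n_active - 0.4) if acts[i] else 0.0
        Q[i, state[i], acts[i]] += alpha * (reward - Q[i, state[i], acts[i]])
        state[i] = int(n_active > N_AGENTS // 2)

print("greedy action per agent in (low, high) interference states:")
print(np.argmax(Q, axis=2))
```

Even this crude version shows the core mechanism: agents act on local observations only, while a shared reward couples their decisions.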


AI – To infinity and beyond? Let's focus on wireless networks and cybersecurity first – Tech Check News

#artificialintelligence

Artificial Intelligence (AI) is deemed one of the biggest technological innovations of this decade. However, as with all innovations, we must focus on fundamental applications first before we quite literally reach for the stars. AI has huge potential for wireless networks, for the people who must protect them, and for those who try to attack them. So how will AI come into play this year, and how will it shape the future?