Sensors provide valuable data about physical quantities and environmental phenomena. Translating these data into concrete actions, however, requires processing inputs that may come from one or many types of sensors, including sensor networks. Such processing can benefit from Artificial Intelligence (AI): machine learning, neural networks (including deep architectures), and information fusion methods have all become common in this field. These concepts can now be applied in different IoT architectures, in which sensor and actuator nodes communicate with one another and form networks. Such networks tend to be autonomous, adapting to varying conditions and thereby creating smart IoT networks.
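As a minimal sketch of the information fusion mentioned above, consider combining noisy readings of the same quantity from several sensors by inverse-variance weighting, a standard building block of fusion pipelines. The sensor values and noise variances below are illustrative assumptions, not taken from any particular deployment.

```python
def fuse_readings(readings):
    """Fuse (value, variance) pairs into one estimate and its variance.

    Each sensor is weighted by the inverse of its noise variance, so
    more precise sensors contribute more to the fused estimate.
    """
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, readings)) / total
    return value, 1.0 / total

# Two temperature sensors measuring the same room, the first more precise:
fused, fused_var = fuse_readings([(21.0, 0.5), (23.0, 2.0)])
print(fused)      # 21.4 -- pulled toward the more precise sensor
print(fused_var)  # 0.4  -- fused estimate is more certain than either sensor
```

Note that the fused variance (0.4) is smaller than either input variance, which is precisely why fusing multiple imperfect sensors pays off.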
As is well known, machine learning (ML) is one of the main branches of artificial intelligence (AI). Its primary objective is to use computational methods to extract information from data. Machine learning has a wide spectrum of practical applications. After early applications in areas such as handwriting recognition, object detection in image processing, speech recognition, medical diagnosis, DNA classification, search engines, and stock market analysis, machine learning algorithms have in recent years been increasingly used in the environmental sciences owing to their strong capability for modelling non-linear phenomena. In particular, these algorithms are already widely used in weather and climate forecasting, as well as in the analysis and modelling of hydrological, ecological, and oceanographic data.
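The core idea described above, using computational methods to extract a predictive model from data, can be sketched with the simplest possible learner: an ordinary least-squares fit of a line to observations. The data points here are invented for illustration and lie exactly on y = 2x + 1.

```python
def fit_line(xs, ys):
    """Closed-form least-squares fit of a 1-D linear model y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept passes through the means.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

# Toy data generated from y = 2x + 1:
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # 2.0 1.0
```

Real environmental applications replace this linear model with non-linear learners (neural networks, tree ensembles), but the principle of fitting parameters to observed data is the same.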
This Special Issue is devoted to recent advances in prediction models. Novel methods, new applications, comparative analyses of models, case studies, and state-of-the-art review papers are particularly welcome. Prediction models are essential to many scientific domains and are gaining widespread popularity. Health care, cybersecurity, education, credit card fraud detection, social media, cloud computing, software measurement, quality and defect simulation, cost and effort estimation, software reuse and evaluation, computational mechanics, theoretical physics, astrophysics, materials design innovation, disease diagnosis, hydrological modeling, earth systems, atmospheric sciences, weather and extreme-event prediction, hazard mapping, natural disaster warning systems, policy-making, energy systems, time-series forecasting, and climate change modeling are among the popular applications of prediction models in the literature. The benefits and generalizability of prediction models across technological and scientific domains have greatly increased the progress, competitiveness, and research impact of many fields.
Deep reinforcement learning (deep RL) is the combination of reinforcement learning (RL) and deep learning. This field of research has been able to solve a wide range of complex decision-making tasks that were previously out of reach for a machine. Deep RL thus opens up many new applications in domains such as healthcare, robotics, smart grids, and finance. This paper provides an introduction to deep reinforcement learning models, algorithms, and techniques, with particular focus on generalization and on how deep RL can be used in practical applications. The reader is assumed to be familiar with basic machine learning concepts.
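To make the reinforcement-learning half of deep RL concrete, the sketch below runs tabular Q-learning on a tiny one-dimensional corridor; deep RL replaces this explicit Q-table with a neural network so the same update rule scales to large state spaces. The environment, reward, and hyperparameters are illustrative choices, not from the paper.

```python
import random

N_STATES = 5           # corridor cells 0..4; reaching cell 4 ends the episode
ACTIONS = [1, -1]      # step right or left
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

# Q-table: expected discounted return for each (state, action) pair.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Temporal-difference update toward the bootstrapped target.
        target = r + GAMMA * max(q[(s2, act)] for act in ACTIONS)
        q[(s, a)] += ALPHA * (target - q[(s, a)])
        s = s2

# The learned greedy policy should step right in every non-terminal state:
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # [1, 1, 1, 1]
```

The temporal-difference update is the same in deep RL; only the representation of Q changes, from a lookup table to a trained function approximator.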
Deep learning has become popular in artificial intelligence, with many applications, owing to its great successes in perceptual tasks such as object detection, image understanding, and speech recognition. Deep learning is also critical in data science, especially for big data analytics, which relies on extracting high-level, complex abstractions as data representations through a hierarchical learning process. In practice, supervised and unsupervised approaches for training deep architectures have been investigated empirically, typically relying on parallel computing facilities such as GPUs or CPU clusters. However, there is still limited understanding of why deep architectures work so well and of how to design computationally efficient training algorithms and hardware acceleration techniques. At the same time, the number of end devices, such as Internet of Things (IoT) devices, has increased dramatically.
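The hierarchical representations mentioned above can be sketched with a two-layer feed-forward network: each layer applies a linear map followed by a nonlinearity, so the second layer builds features out of the first layer's features rather than out of the raw input. The weights below are fixed illustrative numbers, not trained values.

```python
import math

def layer(x, weights, biases):
    """One dense layer: linear combination of inputs, then a tanh nonlinearity."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

x = [0.5, -1.0]                                          # raw input
h1 = layer(x, [[1.0, 0.5], [-0.5, 1.0]], [0.0, 0.1])     # low-level features
h2 = layer(h1, [[0.8, -0.3]], [0.2])                     # feature built from features
print(h2)
```

In an actual deep network these weights would be learned by the supervised or unsupervised training procedures the text refers to; the stacking of layers is what produces increasingly abstract representations.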