
Ethics Readiness of Artificial Intelligence: A Practical Evaluation Method

Adomaitis, Laurynas, Israel-Jost, Vincent, Grinbaum, Alexei

arXiv.org Artificial Intelligence

In the governance of emerging technologies, ethical guidance has often relied on so-called soft law instruments--codes of conduct, guidelines, or frameworks--designed to promote responsible behavior without imposing binding legal constraints. This is partly due to the difficulty of imposing harmonized regulations across the EU, especially in a global context characterized by strong reservations expressed by other international actors, e.g. the United States of America, with regard to regulation of artificial intelligence (AI) that "unduly burdens AI innovation" (Kratsios, Sacks, and Rubio 2025). Another reason is the principle, upheld in several member states such as Germany, that scientific freedom is protected by constitutional law. Nevertheless, the recent trajectory of technological regulation in the European Union shows that soft law can evolve into hard law: this has been the case, notably, with the adoption of the AI Act (European Commission 2022; Terpan 2015).



Convergence Theorems for Entropy-Regularized and Distributional Reinforcement Learning

Jhaveri, Yash, Wiltzer, Harley, Shafto, Patrick, Bellemare, Marc G., Meger, David

arXiv.org Artificial Intelligence

In the pursuit of an optimal policy, reinforcement learning (RL) methods generally ignore the properties of learned policies apart from their expected return. Thus, even when successful, it is difficult to characterize which policies will be learned and what they will do. In this work, we present a theoretical framework for policy optimization that guarantees convergence to a particular optimal policy via vanishing entropy regularization and a temperature decoupling gambit. Our approach realizes an interpretable, diversity-preserving optimal policy as the regularization temperature vanishes and ensures the convergence of policy-derived objects--value functions and return distributions. In a particular instance of our method, for example, the realized policy samples all optimal actions uniformly. Leveraging our temperature decoupling gambit, we present an algorithm that estimates, to arbitrary accuracy, the return distribution associated with its interpretable, diversity-preserving optimal policy.
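
The limiting behaviour described above can be illustrated with a toy Boltzmann (softmax) policy: as the temperature vanishes, probability mass concentrates uniformly on the set of optimal actions. This is a minimal sketch of the entropy-regularization limit only, not the paper's temperature-decoupled algorithm.

```python
import math

def softmax_policy(q_values, tau):
    """Boltzmann policy over action values at temperature tau."""
    m = max(q_values)  # subtract the max for numerical stability
    exps = [math.exp((q - m) / tau) for q in q_values]
    z = sum(exps)
    return [e / z for e in exps]

# Two optimal actions (value 1.0) and one suboptimal action (0.5).
q = [1.0, 1.0, 0.5]
for tau in (1.0, 0.1, 0.01):
    print([round(p, 3) for p in softmax_policy(q, tau)])
# As tau -> 0, the policy approaches uniform over both optimal actions.
```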


Using construction waste hauling trucks' GPS data to classify earthwork-related locations: A Chengdu case study

Yu, Lei, Han, Ke

arXiv.org Artificial Intelligence

Earthwork-related locations (ERLs), such as construction sites, earth dumping grounds, and concrete mixing stations, are major sources of urban dust pollution (particulate matter). Effective management of ERLs is crucial and requires timely and efficient tracking of these locations throughout the city. This work aims to identify and classify urban ERLs using GPS trajectory data from over 16,000 construction waste hauling trucks (CWHTs), as well as 58 urban features encompassing geographic, land cover, POI, and transport dimensions. We compare several machine learning models and examine the impact of various spatio-temporal features on classification performance using real-world data in Chengdu, China. The results demonstrate that 77.8% classification accuracy can be achieved with a limited number of features. The classification framework was implemented in the Alpha MAPS system in Chengdu, where, in December 2023, it identified 724 construction sites/earth dumping grounds, 48 concrete mixing stations, and 80 truck parking locations, enabling the local authority to manage urban dust pollution effectively at low personnel cost.
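
As a minimal stand-in for the "several machine learning models" compared in the paper, a nearest-neighbour classifier over truck-derived features conveys the idea. The feature names and values below (dwell time, daily visit counts) are hypothetical, not the paper's 58 features.

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify a location by majority vote among its k nearest
    neighbours in feature space (Euclidean distance)."""
    nearest = sorted(train, key=lambda rec: math.dist(rec[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical feature vectors: (mean truck dwell time in hours,
# daily truck visits) -> location type.
train = [
    ((6.0, 120), "construction_site"),
    ((5.5, 100), "construction_site"),
    ((1.0, 300), "concrete_mixing_station"),
    ((0.8, 280), "concrete_mixing_station"),
    ((10.0, 15), "truck_parking"),
    ((9.0, 10), "truck_parking"),
]
print(knn_classify(train, (5.8, 110)))  # construction_site
```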


Robustified Learning for Online Optimization with Memory Costs

Li, Pengfei, Yang, Jianyi, Ren, Shaolei

arXiv.org Artificial Intelligence

Online optimization with memory costs has many real-world applications, where sequential actions are made without knowing future inputs. The memory cost couples actions over time, adding substantial challenges. Conventionally, this problem has been approached with expert-designed online algorithms that achieve bounded worst-case competitive ratios, but the resulting average performance is often unsatisfactory. On the other hand, emerging machine learning (ML) based optimizers can improve average performance but lack worst-case performance robustness. In this paper, we propose a novel expert-robustified learning (ERL) approach that achieves both good average performance and robustness. More concretely, for robustness, ERL introduces a novel projection operator that robustifies ML actions by utilizing an expert online algorithm; for average performance, ERL trains the ML optimizer based on a recurrent architecture by explicitly considering downstream expert robustification. We prove that, for any $\lambda\geq1$, ERL is $\lambda$-competitive against the expert algorithm and $\lambda\cdot C$-competitive against the optimal offline algorithm (where $C$ is the expert's competitive ratio). Additionally, we extend our analysis to a novel setting of multi-step memory costs. Finally, our analysis is supported by empirical experiments on an energy scheduling application.
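
The flavour of the robustification step can be sketched in a toy one-dimensional setting: if the ML action would push the cumulative cost beyond $\lambda$ times the expert's, it is moved toward the expert action until the budget is respected. The quadratic hitting cost, absolute-value memory cost, and grid scan below are assumptions for illustration, not the paper's projection operator.

```python
def robustify(x_ml, x_exp, prev, cum_rob, cum_exp, lam, y):
    """Toy 1-D robustification step: pull the ML action toward the
    expert action until the cumulative robustified cost stays within
    lam times the expert's cumulative cost. Hitting cost is quadratic
    around a target y; the memory cost is the absolute move."""
    def step_cost(x, x_prev):
        return (x - y) ** 2 + abs(x - x_prev)

    budget = lam * (cum_exp + step_cost(x_exp, prev))
    # Scan the segment from the ML action to the expert action and
    # keep the first point that satisfies the robustness budget.
    for i in range(101):
        x = x_ml + (x_exp - x_ml) * i / 100
        if cum_rob + step_cost(x, prev) <= budget:
            return x
    return x_exp  # fall back to the expert action

# One step: the ML optimizer proposes 3.0, the expert proposes 1.0.
x = robustify(x_ml=3.0, x_exp=1.0, prev=0.0, cum_rob=0.0, cum_exp=0.0,
              lam=2.0, y=1.0)
```

The returned action lies between the ML and expert proposals: it keeps as much of the ML suggestion as the $\lambda$ budget allows.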


Evolving Constrained Reinforcement Learning Policy

Hu, Chengpeng, Pei, Jiyuan, Liu, Jialin, Yao, Xin

arXiv.org Artificial Intelligence

Evolutionary algorithms have been used to evolve a population of actors that generate diverse experiences for training reinforcement learning agents, which helps tackle the temporal credit assignment problem and improves exploration efficiency. However, when adapting this approach to constrained problems, balancing the trade-off between reward and constraint violation is hard. In this paper, we propose a novel evolutionary constrained reinforcement learning (ECRL) algorithm that adaptively balances reward and constraint violation with stochastic ranking and, at the same time, restricts the policy's behaviour by maintaining a set of Lagrange relaxation coefficients with a constraint buffer. Extensive experiments on robotic control benchmarks show that ECRL achieves outstanding performance compared to state-of-the-art algorithms. Ablation analysis shows the benefits of introducing the stochastic ranking and the constraint buffer.
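
The stochastic ranking the abstract relies on is the classic procedure of Runarsson and Yao: bubble-sort-style sweeps that compare a pair by reward with probability pf (or always, when both individuals are feasible) and by constraint violation otherwise. A minimal sketch, assuming individuals are represented as (reward, violation) pairs with higher reward and lower violation being better:

```python
import random

def stochastic_rank(pop, pf=0.45, rng=None):
    """Stochastic ranking: probabilistically trade off sorting by
    reward against sorting by constraint violation."""
    rng = rng or random.Random()
    pop = list(pop)
    n = len(pop)
    for _ in range(n):
        swapped = False
        for j in range(n - 1):
            (r1, v1), (r2, v2) = pop[j], pop[j + 1]
            if (v1 == 0 and v2 == 0) or rng.random() < pf:
                swap = r1 < r2   # compare by reward
            else:
                swap = v1 > v2   # compare by constraint violation
            if swap:
                pop[j], pop[j + 1] = pop[j + 1], pop[j]
                swapped = True
        if not swapped:
            break
    return pop

# With every individual feasible, ranking reduces to sorting by reward.
ranked = stochastic_rank([(5, 0), (9, 0), (7, 0)], pf=0.0,
                         rng=random.Random(0))
```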


Recruitment-imitation Mechanism for Evolutionary Reinforcement Learning

Lü, Shuai, Han, Shuai, Zhou, Wenbo, Zhang, Junwei

arXiv.org Artificial Intelligence

Reinforcement learning, evolutionary algorithms, and imitation learning are three principal methods for continuous control tasks. Reinforcement learning is sample efficient, yet sensitive to hyper-parameter settings and in need of efficient exploration; evolutionary algorithms are stable, but sample inefficient; imitation learning is both sample efficient and stable, but requires the guidance of expert data. In this paper, we propose the Recruitment-imitation Mechanism (RIM) for evolutionary reinforcement learning, a scalable framework that combines the advantages of the three methods above. The core of this framework is a dual-actor, single-critic reinforcement learning agent. This agent recruits high-fitness actors from the evolutionary population, which guide its learning from the experience replay buffer. At the same time, low-fitness actors in the evolutionary population imitate the behavior patterns of the reinforcement learning agent and improve their adaptability. The reinforcement and imitation learners in this framework can be replaced with any off-policy actor-critic reinforcement learner or data-driven imitation learner. We evaluate RIM on a series of continuous control benchmarks in MuJoCo. The experimental results show that RIM outperforms prior evolutionary and reinforcement learning methods, that RIM's components significantly outperform the components of previous evolutionary reinforcement learning algorithms, and that recruitment using soft updates enables the reinforcement learning agent to learn faster than recruitment using hard updates.
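
The two information flows described above (recruitment of high-fitness actors into the RL agent, imitation of the RL agent by low-fitness actors) can be caricatured with scalar "actors" and soft updates. Everything here (the scalar parameterization, the fitness function, the learning rates) is a hypothetical illustration, not RIM's actual networks or replay mechanics.

```python
def fitness(w, target=5.0):
    """Toy fitness: closer to the target is better."""
    return -abs(w - target)

def rim_step(population, rl_agent, recruit_lr=0.5, imitate_lr=0.5):
    """One toy recruitment-imitation step: the RL agent soft-updates
    toward the highest-fitness actor (recruitment), and the
    lowest-fitness actor soft-updates toward the RL agent (imitation)."""
    ranked = sorted(range(len(population)),
                    key=lambda i: fitness(population[i]))
    worst, best = ranked[0], ranked[-1]
    rl_agent += recruit_lr * (population[best] - rl_agent)
    population[worst] += imitate_lr * (rl_agent - population[worst])
    return population, rl_agent

pop, agent = [0.0, 2.0, 8.0], 1.0
for _ in range(20):
    pop, agent = rim_step(pop, agent)
```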


Proximal Distilled Evolutionary Reinforcement Learning

Bodnar, Cristian, Day, Ben, Lio', Pietro

arXiv.org Machine Learning

Reinforcement Learning (RL) has recently achieved tremendous success through its partnership with Deep Neural Networks (DNNs). Genetic Algorithms (GAs), often seen as a competing approach to RL, have fallen out of favour due to their inability to scale up to the DNNs required to solve the most complex environments. Contrary to this dichotomous view, in the physical world, evolution and learning are complementary processes that continuously interact. The recently proposed Evolutionary Reinforcement Learning (ERL) framework has demonstrated the capacity of the two methods to enhance each other. However, ERL has not fully addressed the scalability problem of GAs. In this paper, we argue that this problem is rooted in an unfortunate combination of a simple genetic encoding for DNNs and the use of traditional biologically inspired variation operators. When applied to these encodings, the standard operators are destructive and cause catastrophic forgetting of the traits the networks have acquired. We propose a novel algorithm called Proximal Distilled Evolutionary Reinforcement Learning (PDERL), characterised by a hierarchical integration between evolution and learning. The main innovation of PDERL is the use of learning-based variation operators that compensate for the simplicity of the genetic representation. Unlike the traditional operators, the ones we propose meet their functional requirements. We evaluate PDERL in five robot locomotion environments from the OpenAI Gym. Our method outperforms ERL, as well as two state-of-the-art RL algorithms, PPO and TD3, in all environments.
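
Why naive mutation of network weights is destructive, and how sensitivity-aware mutation tempers it, can be shown on a linear policy: weights that multiply large-magnitude inputs get proportionally smaller perturbations, so the policy's behaviour shifts roughly equally per weight. The scaling rule below is a toy analogue of this idea, not PDERL's actual proximal mutation operator.

```python
import random

def proximal_mutate(weights, states, sigma=0.1, rng=None):
    """Toy sensitivity-scaled mutation for a linear policy
    a = sum(w_i * s_i): perturb each weight with Gaussian noise
    divided by that weight's average output sensitivity (|s_i|
    averaged over a batch of visited states)."""
    rng = rng or random.Random()
    sens = [sum(abs(s[i]) for s in states) / len(states)
            for i in range(len(weights))]
    return [w + sigma * rng.gauss(0, 1) / max(si, 1e-8)
            for w, si in zip(weights, sens)]

# Weight 1 multiplies large-magnitude inputs, so it is mutated far
# less than weight 0.
child = proximal_mutate([1.0, -2.0], [[0.1, 10.0], [0.2, 12.0]],
                        rng=random.Random(0))
```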


Deep Models for Relational Databases

Graham, Devon, Ravanbakhsh, Siamak

arXiv.org Machine Learning

Due to its extensive use in databases, the relational model is ubiquitous in representing big data. We propose to apply deep learning to this type of relational data by introducing an Equivariant Relational Layer (ERL), a neural network layer derived from the entity-relationship model of the database. Our layer relies on identifying exchangeabilities in the relational database and expressing them as a permutation group. We prove that an ERL is an optimal parameter-sharing scheme under the given exchangeability constraints, and that it subsumes recently introduced deep models for sets, exchangeable tensors, and graphs. The proposed model has linear complexity in the size of the relational data and can be used for both inductive and transductive reasoning in databases, including the prediction of missing records and database embedding. This opens the door to the application of deep learning to one of the most abundant forms of data.
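
A minimal instance of such parameter sharing, for a single 2-D exchangeable relation (e.g. a user-by-item table): each output entry is a shared-parameter mix of the entry itself, its row mean, its column mean, and the global mean, which makes the layer equivariant to independent row and column permutations. The four-term form follows known exchangeable-tensor layers and is offered as an assumed simplest case, not the general ERL construction.

```python
def equivariant_layer(X, a, b, c, d):
    """Permutation-equivariant layer for a 2-D exchangeable array:
    out[i][j] = a*X[i][j] + b*rowmean(i) + c*colmean(j) + d*mean(X),
    with one shared scalar parameter per term."""
    n, m = len(X), len(X[0])
    row = [sum(r) / m for r in X]
    col = [sum(X[i][j] for i in range(n)) / n for j in range(m)]
    tot = sum(row) / n
    return [[a * X[i][j] + b * row[i] + c * col[j] + d * tot
             for j in range(m)] for i in range(n)]

X = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
Y = equivariant_layer(X, 0.5, 0.25, 0.25, 0.1)
```

Permuting the rows (or columns) of `X` permutes the rows (or columns) of `Y` identically, which is the exchangeability constraint the ERL generalizes to arbitrary entity-relationship schemas.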