Similarity Metrics for Different Market Scenarios in ABIDES
Pino, Diego, García, Javier, Fernández, Fernando, Vyetrenko, Svitlana S
Markov Decision Processes (MDPs) are an effective way to formally describe many Machine Learning problems. In fact, MDPs have recently emerged as a powerful framework for modeling financial trading tasks; for example, financial MDPs can model different market scenarios. However, learning a (near-)optimal policy for each of these financial MDPs can be a very time-consuming process, especially when nothing is known about the policy to begin with. An alternative approach is to find a similar financial MDP for which a policy has already been learned, and then reuse that policy while learning a policy for the new financial MDP. Such knowledge transfer between market scenarios raises two issues: on the one hand, how to measure the similarity between financial MDPs; on the other hand, how to use this similarity measurement to effectively transfer knowledge between them. This paper addresses both issues. Regarding the first, it analyzes the use of three similarity metrics based on conceptual, structural and performance aspects of the financial MDPs. Regarding the second, it uses Probabilistic Policy Reuse to balance exploitation and exploration in the learning of a new financial MDP according to the similarity of the previous financial MDPs whose knowledge is reused.
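The following is a minimal sketch of the pi-reuse exploration strategy at the core of Probabilistic Policy Reuse, assuming a tabular Q-learner and a hypothetical environment interface (env.reset, env.step, env.actions); the parameter names and decay schedule are illustrative assumptions, not the paper's actual implementation.

```python
import random
from collections import defaultdict

def pi_reuse_episode(env, Q, past_policy, psi=1.0, psi_decay=0.95,
                     epsilon=0.1, alpha=0.1, gamma=0.95, max_steps=200):
    """One learning episode with pi-reuse exploration (illustrative sketch).

    With probability psi the action suggested by the reused (past) policy is
    taken; otherwise the agent acts epsilon-greedily on its current Q-table.
    psi decays after every step, so the influence of the past policy fades.
    """
    state = env.reset()                     # hypothetical environment API
    total_reward = 0.0
    for _ in range(max_steps):
        if random.random() < psi:
            action = past_policy(state)     # exploit the transferred policy
        elif random.random() < epsilon:
            action = random.choice(env.actions)
        else:
            action = max(env.actions, key=lambda a: Q[(state, a)])

        next_state, reward, done = env.step(action)

        # Standard tabular Q-learning update on the new task
        best_next = max(Q[(next_state, a)] for a in env.actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

        total_reward += reward
        psi *= psi_decay
        state = next_state
        if done:
            break
    return total_reward

# Example wiring (all names hypothetical):
Q = defaultdict(float)                      # value table for the new financial MDP
# gain = pi_reuse_episode(env, Q, past_policy)
```

The average return obtained while reusing a given past policy (the "reuse gain") is one natural way to realize the performance-based similarity metric mentioned above: the more a past policy helps, the more similar its MDP is assumed to be.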
Disturbing Reinforcement Learning Agents with Corrupted Rewards
Majadas, Rubén, García, Javier, Fernández, Fernando
Reinforcement Learning (RL) algorithms have led to recent successes in solving complex games, such as Atari or StarCraft, and have had a huge impact in real-world applications such as cybersecurity and autonomous driving. On the downside, recent works have shown how the performance of RL algorithms degrades under even soft changes in the reward function. However, little work has been done on how sensitive RL algorithms are to such disturbances depending on the aggressiveness of the attack and the learner's exploration strategy. In this paper, we propose to fill this gap in the literature by analyzing the effects of different attack strategies based on reward perturbations, and by studying their effect on the learner depending on its exploration strategy. In order to explain the observed behaviors, we choose a sub-class of MDPs, episodic stochastic goal-only-reward MDPs, and in particular an intelligible grid domain as a benchmark. In this domain, we demonstrate that smoothly crafted adversarial rewards are able to mislead the learner, and that policies learned with low exploration probability values are more robust to corrupted rewards. Finally, in the proposed learning scenario, a counterintuitive result arises: attacking at every learning episode is the lowest-cost attack strategy.
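Below is a minimal sketch of how a reward-perturbation attack can be wrapped around an episodic goal-only-reward environment; the wrapper name, the attack_prob/noise parameters and the env.step interface are illustrative assumptions, not the authors' exact setup.

```python
import random

class RewardCorruptionWrapper:
    """Wraps an environment and perturbs the reward the learner observes.

    attack_prob controls how many episodes are attacked (1.0 = attack every
    episode); flip_goal_reward inverts the sparse goal reward, a smooth,
    low-magnitude corruption of the kind that can mislead the learner.
    """
    def __init__(self, env, attack_prob=1.0, noise=0.0, flip_goal_reward=True):
        self.env = env
        self.attack_prob = attack_prob
        self.noise = noise
        self.flip_goal_reward = flip_goal_reward
        self.attacking = False

    def reset(self):
        # Decide once per episode whether this episode is attacked
        self.attacking = random.random() < self.attack_prob
        return self.env.reset()

    def step(self, action):
        state, reward, done = self.env.step(action)
        if self.attacking:
            if self.flip_goal_reward and reward > 0:
                reward = -reward                      # corrupt the goal-only reward
            reward += random.gauss(0.0, self.noise)   # optional smooth noise
        return state, reward, done
```

Sweeping attack_prob (attack aggressiveness) against the learner's exploration rate reproduces, under these assumptions, the kind of aggressiveness/exploration analysis described in the abstract.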
Autonomous Mobile Robot Control and Learning with the PELEA Architecture
Quintero, Ezequiel (Universidad Carlos III de Madrid) | Alcázar, Vidal (Universidad Carlos III de Madrid) | Borrajo, Daniel (Universidad Carlos III de Madrid) | Fdez-Olivares, Juan (Universidad de Granada) | Fernández, Fernando (Universidad Carlos III de Madrid) | García-Olaya, Ángel (Universidad Carlos III de Madrid) | Guzmán, César (Universidad Politécnica de Valencia) | Onaindía, Eva (Universidad Politécnica de Valencia) | Prior, David (Universidad de Granada)
In this paper we describe the integration of a robot control platform (Player/Stage) and a real robot (Pioneer P3DX) with PELEA (Planning, Execution and LEarning Architecture). PELEA is a general-purpose planning architecture suitable for a wide range of real-world applications, from robotics to emergency management. It allows planning engineers to generate planning applications, since it integrates planning, execution, replanning, monitoring and learning capabilities. We also present a relational learning approach for automatically modeling robot action execution durations, with the purpose of improving the planning process of PELEA by refining domain definitions.
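The following is a minimal sketch of the plan-execute-monitor-replan loop that an architecture like PELEA coordinates, assuming hypothetical planner, robot-controller and goal interfaces; none of these names come from the actual PELEA codebase, and the duration logging only illustrates where a relational learner could refine the action-duration model.

```python
def run_mission(planner, robot, goal, max_replans=10):
    """Illustrative plan-execute-monitor-replan loop (not the real PELEA API).

    Each executed action is logged with its observed duration so that a
    relational learner can later refine the action-duration model used by
    the planner when regenerating plans.
    """
    execution_log = []                       # (action, observed_duration) pairs
    state = robot.sense()
    for _ in range(max_replans):
        plan = planner.plan(state, goal)     # high-level planning
        if plan is None:
            return False, execution_log
        for action in plan:
            observed = robot.execute(action)             # low-level execution
            execution_log.append((action, observed.duration))
            state = robot.sense()                        # monitoring
            if not observed.success or state.violates(action.expected_effects):
                break                                    # discrepancy -> replan
        else:
            if state.satisfies(goal):
                return True, execution_log
    return False, execution_log
```

Under this sketch, the execution_log is the raw material for learning: relating action arguments and world state to observed durations yields refined domain definitions that the planner can exploit on the next planning cycle.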