Papadakis, Panagiotis
Leveraging Event Streams with Deep Reinforcement Learning for End-to-End UAV Tracking
Souissi, Ala, Fradi, Hajer, Papadakis, Panagiotis
In this paper, we present our proposed approach for active tracking to increase the autonomy of Unmanned Aerial Vehicles (UAVs) using event cameras, low-energy imaging sensors that offer significant advantages in speed and dynamic range. The proposed tracking controller responds to visual feedback from the mounted event sensor, adjusting the drone's movements to follow the target. To leverage the full motion capabilities of a quadrotor and the unique properties of event sensors, we propose an end-to-end deep reinforcement learning (DRL) framework that maps raw sensor data from event streams directly to control actions for the UAV. To learn an optimal policy under highly variable and challenging conditions, we opt for a simulation environment with domain randomization for effective transfer to real-world environments. We demonstrate the effectiveness of our approach through experiments in challenging scenarios, including fast-moving targets and changing lighting conditions, showing improved generalization capabilities.
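For concreteness, below is a minimal sketch of the kind of end-to-end mapping the abstract describes: a policy that takes a voxelized event stream and outputs continuous UAV velocity commands. The layer sizes, the number of event bins, and the four-dimensional action (vx, vy, vz, yaw rate) are illustrative assumptions, not the paper's reported architecture.

```python
# Sketch only: an event-stream-to-action policy under assumed shapes.
import torch
import torch.nn as nn

class EventTrackingPolicy(nn.Module):
    def __init__(self, event_bins: int = 5, n_actions: int = 4):
        super().__init__()
        # Assumption: events are accumulated into a (bins, H, W) voxel grid.
        self.encoder = nn.Sequential(
            nn.Conv2d(event_bins, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Head outputs e.g. (vx, vy, vz, yaw_rate), squashed to [-1, 1].
        self.head = nn.Sequential(nn.Linear(64, 128), nn.ReLU(),
                                  nn.Linear(128, n_actions), nn.Tanh())

    def forward(self, event_voxels: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(event_voxels))

policy = EventTrackingPolicy()
dummy_obs = torch.zeros(1, 5, 128, 128)  # one voxelized event slice
print(policy(dummy_obs).shape)           # torch.Size([1, 4])
```

In a DRL setup such a network would serve as the actor, trained in simulation with domain randomization before transfer to the real platform.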
Feature Expansion and Enhanced Compression for Class Incremental Learning
Ferdinand, Quentin, Le Chenadec, Gilles, Clement, Benoit, Papadakis, Panagiotis, Oliveau, Quentin
Class incremental learning consists of training discriminative models to classify an increasing number of classes over time. However, doing so using only the newly added class data leads to the well-known problem of catastrophic forgetting of the previous classes. Recently, dynamic deep learning architectures have been shown to exhibit a better stability-plasticity trade-off: new feature extractors are added to the model to learn new classes, followed by a compression step that scales the model back to its original size, thus avoiding a growing number of parameters. In this context, we propose a new algorithm that enhances the compression of previous class knowledge by cutting and mixing patches of previous class samples with the new images during compression, using our Rehearsal-CutMix method. We show that this new data augmentation reduces catastrophic forgetting by specifically targeting past class information and improving its compression. Extensive experiments on the CIFAR and ImageNet datasets under diverse incremental learning evaluation protocols demonstrate that our approach consistently outperforms the state of the art. The code will be made available upon publication of our work.
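As a rough illustration of the Rehearsal-CutMix idea, the sketch below cuts a patch from a rehearsal sample of a previous class, pastes it into a new-class image, and mixes the one-hot labels by patch area. The function name and the Beta-sampled patch size follow the standard CutMix recipe and are assumptions, not the paper's exact procedure.

```python
# Sketch only: rehearsal-driven CutMix during the compression step.
import numpy as np

def rehearsal_cutmix(new_img, new_label, old_img, old_label, rng, alpha=1.0):
    """new_img/old_img: (H, W, C) arrays; labels: one-hot vectors."""
    h, w = new_img.shape[:2]
    lam = rng.beta(alpha, alpha)                 # mixing ratio ~ Beta(a, a)
    ph, pw = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    y = rng.integers(0, h - ph + 1)              # random patch location
    x = rng.integers(0, w - pw + 1)
    mixed = new_img.copy()
    mixed[y:y + ph, x:x + pw] = old_img[y:y + ph, x:x + pw]
    area = (ph * pw) / (h * w)                   # actual label weight
    mixed_label = (1 - area) * new_label + area * old_label
    return mixed, mixed_label

rng = np.random.default_rng(0)
new, old = rng.random((32, 32, 3)), rng.random((32, 32, 3))
img, lab = rehearsal_cutmix(new, np.eye(10)[3], old, np.eye(10)[7], rng)
```

Because the pasted patch always comes from the rehearsal buffer of past classes, every compressed batch explicitly re-exposes the student model to previous class information.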
Reports of the AAAI 2011 Fall Symposia
Blisard, Sam (Naval Research Laboratory) | Carmichael, Ted (University of North Carolina at Charlotte) | Ding, Li (University of Maryland, Baltimore County) | Finin, Tim (University of Maryland, Baltimore County) | Frost, Wende (Naval Research Laboratory) | Graesser, Arthur (University of Memphis) | Hadzikadic, Mirsad (University of North Carolina at Charlotte) | Kagal, Lalana (Massachusetts Institute of Technology) | Kruijff, Geert-Jan M. (German Research Center for Artificial Intelligence) | Langley, Pat (Arizona State University) | Lester, James (North Carolina State University) | McGuinness, Deborah L. (Rensselaer Polytechnic Institute) | Mostow, Jack (Carnegie Mellon University) | Papadakis, Panagiotis (Sapienza University of Rome) | Pirri, Fiora (Sapienza University of Rome) | Prasad, Rashmi (University of Wisconsin-Milwaukee) | Stoyanchev, Svetlana (Columbia University) | Varakantham, Pradeep (Singapore Management University)
The Association for the Advancement of Artificial Intelligence was pleased to present the 2011 Fall Symposium Series, held Friday through Sunday, November 4–6, at the Westin Arlington Gateway in Arlington, Virginia. The titles of the seven symposia are as follows: (1) Advances in Cognitive Systems; (2) Building Representations of Common Ground with Intelligent Agents; (3) Complex Adaptive Systems: Energy, Information and Intelligence; (4) Multiagent Coordination under Uncertainty; (5) Open Government Knowledge: AI Opportunities and Challenges; (6) Question Generation; and (7) Robot-Human Teamwork in Dynamic Adverse Environment. The highlights of each symposium are presented in this report.
Awareness in Mixed Initiative Planning
Gianni, Mario (Sapienza University of Rome) | Papadakis, Panagiotis (Sapienza University of Rome) | Pirri, Fiora (Sapienza University of Rome) | Pizzoli, Matia (Sapienza University of Rome)
For tasks that need to be accomplished in unconstrained environments, as in the case of Urban Search and Rescue (USAR), human-robot collaboration is considered an indispensable component. Collaboration rests on accurate, mutually consistent models of robot and human perception, so that information critical to the accomplishment of a task is exchanged efficiently and in a simplified fashion that minimizes the interaction overhead. In this paper, we highlight the features of a human-robot team, i.e., how robot perception can be combined with human perception under a task-driven direction for USAR. We elaborate on the design of the components of a mixed-initiative system wherein a task assigned to the robot is planned and executed jointly with the human operator as a result of their interaction. We ground our description by demonstrating the application of mixed-initiative planning in a number of examples related to the morphological adaptation of the rescue robot.
A Unified Framework for Planning and Execution-Monitoring of Mobile Robots
Gianni, Mario (University of Rome "La Sapienza") | Papadakis, Panagiotis (University of Rome "La Sapienza") | Pirri, Fiora (University of Rome "La Sapienza") | Liu, Ming (Swiss Federal Institute of Technology, Zurich) | Pomerleau, Francois (Swiss Federal Institute of Technology, Zurich) | Colas, Francis (Swiss Federal Institute of Technology, Zurich) | Zimmermann, Karel (Czech Technical University, Prague) | Svoboda, Tomas (Czech Technical University, Prague) | Petricek, Tomas (Czech Technical University, Prague) | Kruijff, Geert (German Research Center for Artificial Intelligence) | Khambhaita, Harmish (German Research Center for Artificial Intelligence) | Zender, Hendrik (German Research Center for Artificial Intelligence)
We present an original integration of high-level planning and execution with incoming perceptual information from vision, SLAM, topological map segmentation, and dialogue. The task of the robot system implementing the integrated model is to explore unknown areas and verbally report detected objects to an operator. The knowledge base of the planner maintains a graph-based representation of the metric map, dynamically constructed via an unsupervised topological segmentation method and augmented with information about the type and position of objects detected within the map, such as cars or containers. Based on this knowledge, the cognitive robot can infer strategies, generating parametric plans that are instantiated from the perceptual processes. Finally, a model-based approach to the execution and control of the robot system is proposed, which concurrently monitors the low-level status of the system and the execution of its activities in order to achieve the goal instructed by the operator.
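A minimal sketch, under assumed names, of the planner's graph-based map representation described above: nodes are topological segments of the metric map, edges connect traversable neighbours, and detected objects annotate the node where they were observed.

```python
# Sketch only: a knowledge-base graph over topological map segments.
import networkx as nx

topo_map = nx.Graph()
topo_map.add_node("area_1", centroid=(2.0, 3.5), objects=[])
topo_map.add_node("area_2", centroid=(8.1, 4.0), objects=[])
topo_map.add_edge("area_1", "area_2", traversable=True)

def add_detection(graph, area, obj_type, position):
    """Augment the knowledge base with a detected object (e.g. car, container)."""
    graph.nodes[area]["objects"].append({"type": obj_type, "pos": position})

add_detection(topo_map, "area_2", "container", (8.4, 3.2))

# The planner can then query, e.g., all areas with objects left to report:
to_report = [a for a, d in topo_map.nodes(data=True) if d["objects"]]
print(to_report)  # ['area_2']
```

Keeping perceptual results as node attributes lets the segmentation process grow the graph while the planner instantiates parametric plans against whatever the graph currently contains.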
Help Me to Help You: How to Learn Intentions, Actions and Plans
Khambhaita, Harmish (DFKI GmbH Saarbruecken) | Kruijff, Geert-Jan (DFKI GmbH Saarbruecken) | Mancas, Matei (University of Mons) | Gianni, Mario (Sapienza University of Rome) | Papadakis, Panagiotis (Sapienza University of Rome) | Pirri, Fiora (Sapienza University of Rome) | Pizzoli, Matia (Sapienza University of Rome)
The collaboration between a human and a robot is here understood as a learning process mediated by the instructor's prompting behaviours, with the apprentice collecting information from them to learn a plan. The instructor wears the Gaze Machine, a wearable device that gathers and conveys visual and audio input from the instructor while a task is being executed. The robot, on the other hand, is eager to learn the best sequence of actions, their timing, and how they interleave. The cross-relations among actions are specified both in terms of time intervals for their execution and in terms of location in space, to cope with the instructor's interaction with people and objects in the scene. We outline this process: how to transform the rich information delivered by the Gaze Machine into a plan; specifically, how to obtain a map of the instructor's positions and gaze via visual SLAM and gaze fixations; further, how to obtain an action map from the running commentaries and the topological maps; and, finally, how to obtain a temporal net of the relevant actions that have been extracted. The learned structure is then managed by the flexible-time planning paradigm in the Situation Calculus for execution monitoring and plan generation.
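A minimal sketch, with hypothetical names, of the temporal net of learned actions: each action carries an execution interval and a spatial anchor, and qualitative (Allen-style) interval relations between actions are derived from their timings. This is an illustration of the data structure, not the authors' Situation Calculus machinery.

```python
# Sketch only: actions with time intervals and derived interval relations.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    start: float
    end: float
    location: tuple  # where gaze/position anchored the action in space

def allen_relation(a: Action, b: Action) -> str:
    """A few Allen interval relations, enough to order learned actions."""
    if a.end < b.start:
        return "before"
    if a.start < b.start and b.end < a.end:
        return "contains"
    if a.start == b.start and a.end == b.end:
        return "equals"
    return "overlaps"

grasp = Action("grasp_cup", 2.0, 4.0, (1.2, 0.5))
pour = Action("pour", 5.0, 8.0, (1.3, 0.6))
print(allen_relation(grasp, pour))  # before
```

A net of such pairwise relations gives the planner both the ordering constraints and the spatial context needed to regenerate and monitor the learned plan.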