The first type is called an engagement interaction: interactions within circles of trust (family, friends, close co-workers, etc.). A higher frequency of engagement interactions is a predictor of productivity, since these interactions help coordinate the behavior of a group. The second type is called an exploration interaction, referring to times when we expose ourselves to people outside our regular circles. This is generally how we learn new ideas, and having more exploration interactions can predict the level of innovation of a company or team.
Recent work on multi-agent sequential decision making using decentralized partially observable Markov decision processes has focused on interaction-oriented resolution techniques and has produced promising results. These techniques take advantage of local interactions and coordination. In this paper, we propose an approach based on an interaction-oriented resolution of decentralized decision makers. To this end, distributed value functions (DVFs) are used to decouple the multi-agent problem into a set of individual agent problems. However, existing DVF techniques assume permanent, free communication between the agents. We extend the DVF methodology to address full local observability, limited information sharing, and communication breakdowns. We apply the new DVF to a real-world multi-robot exploration task in which each robot locally computes a strategy that minimizes interactions between the robots and maximizes the team's space coverage, even under communication constraints. Our technique has been implemented and evaluated both in simulation and in real-world scenarios during a robotic challenge for the exploration and mapping of an unknown environment. Experimental results from these scenarios and from the challenge, in which our system finished as vice-champion, are reported.
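The decoupling idea behind a distributed value function can be sketched in a few lines: each robot plans with ordinary value iteration, but discounts states that other robots' announced value functions already cover. The following toy sketch (a 1-D corridor with frontier rewards; the function names `value_iteration` and `dvf_value` and the interaction weight `f` are illustrative, not from the paper) shows one simple way to realize that penalty, by clamping it into the reward term:

```python
import numpy as np

def value_iteration(rewards, gamma=0.9, iters=50):
    """Standard single-agent value iteration on a 1-D corridor.
    Actions: move left, stay, move right (deterministic)."""
    n = len(rewards)
    V = np.zeros(n)
    for _ in range(iters):
        nxt = np.empty(n)
        for s in range(n):
            # Reachable next cells from s (clamped at the corridor ends).
            neighbors = [max(s - 1, 0), s, min(s + 1, n - 1)]
            nxt[s] = max(rewards[s2] + gamma * V[s2] for s2 in neighbors)
        V = nxt
    return V

def dvf_value(rewards, other_values, f=1.0, gamma=0.9, iters=50):
    """Distributed-value-function sketch: plan as if alone, but subtract a
    weighted sum of the other robots' values from the reward, so frontier
    cells another robot is expected to cover become less attractive.
    The penalty is clamped so adjusted rewards stay non-negative."""
    penalty = f * sum(other_values)
    adjusted = rewards - np.minimum(penalty, rewards)
    return value_iteration(adjusted, gamma, iters)

# Two frontier cells; robot 2 has announced its value function, so robot 1
# deflates the cells robot 2 values highly and the two spread out.
rewards = np.array([0.0, 0.0, 1.0, 0.0, 1.0])
V_alone = value_iteration(rewards)
V_other = value_iteration(rewards)          # robot 2's announced values
V_dvf = dvf_value(rewards, [V_other], f=0.5)
```

Because the penalty only removes reward, the DVF values are never larger than the solo-planning values; the effect is purely to redistribute which frontiers each robot finds attractive.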
Space exploration game Astroneer has racked up a major following in its Early Access stage. After hints from the developer, the game finally has an official release date. The title from System Era Softworks will be available for Xbox One and Windows 10 starting on February 6th, 2019. It will run you $29.99 at launch. If you aren't one of the two million players to take a crack at the Early Access version of Astroneer, here's what to expect: the interplanetary survival sandbox game is a sort of mix of Minecraft and fellow space exploration game No Man's Sky.
Humans learn to play video games significantly faster than state-of-the-art reinforcement learning (RL) algorithms. Inspired by this, we introduce strategic object-oriented reinforcement learning (SOORL), which learns a simple dynamics model through automatic model selection and performs efficient planning with strategic exploration. We compare different exploration strategies in a model-based setting in which exact planning is impossible. Additionally, we test our approach on Pitfall!, perhaps the hardest Atari game, and achieve significantly improved exploration and performance over prior methods.
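One common way to implement strategic exploration in a model-based setting is count-based optimism: the agent learns a tabular model from experience and adds a bonus that shrinks with the visit count, steering planning toward poorly modeled regions. The sketch below illustrates that general idea, not the SOORL implementation itself; the class name, the `bonus` parameter, and the one-step lookahead are all assumptions for illustration:

```python
import numpy as np
from collections import defaultdict

class OptimisticModelAgent:
    """Tabular model learner with a count-based exploration bonus.

    Q(s, a) is estimated from the learned model, plus bonus / sqrt(N(s, a));
    untried actions receive the full bonus, so the agent is drawn to them.
    """

    def __init__(self, n_actions, bonus=1.0, gamma=0.95):
        self.n_actions = n_actions
        self.bonus = bonus
        self.gamma = gamma
        self.counts = defaultdict(int)        # N(s, a)
        self.reward_sum = defaultdict(float)  # cumulative reward per (s, a)
        # Empirical transition counts: (s, a) -> {s': count}
        self.transitions = defaultdict(lambda: defaultdict(int))

    def update(self, s, a, r, s2):
        """Record one observed transition (s, a, r, s')."""
        self.counts[(s, a)] += 1
        self.reward_sum[(s, a)] += r
        self.transitions[(s, a)][s2] += 1

    def q_value(self, s, a, V):
        n = self.counts[(s, a)]
        if n == 0:
            return self.bonus  # optimistic value for the unknown
        r_hat = self.reward_sum[(s, a)] / n
        exp_next = sum(c / n * V.get(s2, 0.0)
                       for s2, c in self.transitions[(s, a)].items())
        return r_hat + self.bonus / np.sqrt(n) + self.gamma * exp_next

    def act(self, s, V=None):
        """Greedy one-step lookahead under the optimistic model."""
        V = V or {}
        qs = [self.q_value(s, a, V) for a in range(self.n_actions)]
        return int(np.argmax(qs))
```

After a disappointing outcome from one action, the shrinking bonus makes the remaining untried actions look better, so the agent systematically tries them rather than exploring at random.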
Popular media has spawned a recent interest in teaching robotics in the classroom. Many different approaches have been attempted, many of which focus on robot competitions. However, after a competition ends, students often do not know where to turn to keep their curiosity about robotics alive. This paper discusses a collaborative approach that shows students a clear path from early robot competitions through to careers in the field. The approach relies on student participation in real research and a ladder of mentorship throughout their academic journey.