The field of adaptive robotics involves simulations and real-world implementations of robots that adapt to their environments. In this article, I introduce adaptive environmentics--the flip side of adaptive robotics--in which the environment adapts to the robot. To illustrate the approach, I offer three simple experiments in which a genetic algorithm is used to shape an environment for a simulated Khepera robot. I then discuss at length the potential of adaptive environmentics, delineating several possible avenues of future research.
The field of adaptive robotics involves simulations and real-world implementations of robots that adapt to their environments. In this article, I introduce adaptive environmentics--the flip side of adaptive robotics--in which the environment adapts to the robot. As George Bernard Shaw put it, "The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man." And as Herbert Simon observed of his famous ant, the apparent complexity of an agent's behavior over time is largely a reflection of the complexity of the environment in which it finds itself.

Using both simulated and real robots, and applying techniques such as reinforcement learning, artificial neural networks, genetic algorithms, and fuzzy logic, researchers have obtained robots that display an amazing slew of behaviors and perform a multitude of tasks, including walking, pushing boxes, navigating, negotiating an obstacle course, playing ball, and foraging (Arkin 1998a). To cite one typical example of an ever-growing many, Yung and Ye (1999) recently wrote: "We have presented a fuzzy navigator that performs well in complex and unknown environments, using a rule base that is learned from a simple corridor-like environment. The principle of the navigator is built on the fusion of the obstacle avoidance and goal seeking behaviors aided by an environment evaluator to tune the universe of discourse of the input sensor readings and enhance its adaptability. For this reason, the navigator has been able to learn extremely quickly in a simple environment, and then operate in an unknown environment, where exploration is not required at all."

This quote typifies the underlying theme of adaptive robotics: Have a robot adapt to a given environment. "Given" signifies neither that the environment is known nor that it is static; it means that the robot must adapt to the quirks and idiosyncrasies imposed by the environment--which, for its part, does nothing at all to accommodate the puffing robot.
This fundamental principle of adaptive robotics--the environment's unyielding nature--is repealed in this article. Dubbed adaptive environmentics, the basic idea is to create scenarios that are mirror images of those found in adaptive robotics: The environment adapts to a given robot. I hasten to add that in some cases it is not possible to alter the environment, and in other cases having the robot adapt is simply the underlying objective; indeed, adaptive robotics has produced many interesting results based on these principles.
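As a toy illustration of the idea (my own sketch, not the article's actual Khepera experiments), the following evolves the obstacle layout of a one-dimensional corridor so that a fixed, non-adaptive robot policy performs well--the environment, not the robot, is the object of selection. All names and parameters are illustrative:

```python
import random

CORRIDOR_LEN = 20
N_OBSTACLES = 5

def robot_score(obstacles):
    """Fixed robot: walks rightward, losing a step whenever it must
    sidestep an obstacle. The ENVIRONMENT's fitness is the robot's
    final progress--higher means the environment suits the robot."""
    progress = 0
    for step in range(CORRIDOR_LEN):
        progress += -1 if step in obstacles else 1
    return progress

def evolve(pop_size=30, generations=40, seed=0):
    """Genetic algorithm over environments (lists of obstacle positions)."""
    rng = random.Random(seed)
    pop = [sorted(rng.sample(range(CORRIDOR_LEN), N_OBSTACLES))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=robot_score, reverse=True)
        survivors = pop[:pop_size // 2]          # elitist truncation selection
        children = []
        for parent in survivors:
            child = list(parent)
            child[rng.randrange(len(child))] = rng.randrange(CORRIDOR_LEN)
            children.append(sorted(set(child)))  # duplicates may merge
        pop = survivors + children
    return max(pop, key=robot_score)
```

Because selection acts on obstacle placements while the robot's policy is frozen, the corridor gradually rearranges itself to accommodate the robot--a miniature mirror image of the usual adaptive-robotics setup.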
The difficult task of searching for victims in areas devastated by earthquakes or similar catastrophes has not been solved. Many strategies and techniques, drawing on all kinds of available resources, have been developed by different rescue teams, especially in those parts of the world where natural disasters are frequent. One possible approach to aiding in this labor, while keeping human beings out of harm's way, is to use robots capable of entering these areas and looking for any sign of life. A machine that fulfills these requirements must have robust hardware and software, both to withstand the most demanding environmental conditions and to carry out search and exploration tasks systematically. In its first stage, this project pursues the development of simple strategies that robots can use to search for victims in places whose structure is unknown and that contain obstacles in non-patterned positions.
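One classical building block for such strategies--a sketch of my own under simplifying assumptions, not the project's actual algorithm--is systematic exploration of an unknown grid: visit every reachable free cell regardless of where the obstacles happen to lie, recording any victims encountered. The grid below is a hypothetical example map:

```python
from collections import deque

GRID = [
    "..#..",
    ".#.V.",
    ".....",
    "V#.#.",
    ".....",
]  # '.' free, '#' obstacle, 'V' victim -- layout unknown to the robot

def explore(grid, start=(0, 0)):
    """Breadth-first sweep of every reachable free cell; obstacles may
    sit anywhere, so coverage is driven purely by local adjacency."""
    rows, cols = len(grid), len(grid[0])
    seen, victims = {start}, []
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if grid[r][c] == "V":
            victims.append((r, c))           # sign of life found
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return victims
```

The point of the sketch is that no prior map is required: the non-patterned obstacle positions only constrain which neighbors are expanded, not the overall strategy.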
The "Michigan" and "Pittsburgh" classifier-system structures are both powerful methods by which evolutionary learning and lifetime reinforcement can be combined to create entities capable of autonomously acquiring useful rules about a chosen problem domain. Fuzzy classifier systems widen the scope of these autonomous rule-acquisition structures to continuous-valued input and output spaces. In the "Pittsburgh" approach, evolutionary techniques operate at the level of whole rule sets (Smith, 1980; Carse, Fogarty & Munro, 1996). By contrast, in the "Michigan" approach, evolutionary techniques operate at the level of individual rules in a set (Booker, Goldberg & Holland, 1989). A comparative investigation into the characteristics and performance of these techniques in an appropriate shared problem domain is an enlightening and fruitful area for research. The work presented here is part of such a comparative investigation.
Copyright 2001, American Association for Artificial Intelligence (www.aaai.org).
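The structural contrast can be made concrete with a deliberately tiny sketch of my own (a crisp one-dimensional toy concept, not the paper's fuzzy systems): in the Pittsburgh approach an individual is a whole rule set, whereas in the Michigan approach an individual is a single rule and the population as a whole constitutes the rule set.

```python
import random

rng = random.Random(1)
samples = [rng.uniform(0, 10) for _ in range(200)]  # true concept: label 1 iff x >= 5

def rule_matches(rule, x):
    lo, hi, _label = rule                 # rule: "if lo <= x < hi then label"
    return lo <= x < hi

def classify(ruleset, x):
    for rule in ruleset:
        if rule_matches(rule, x):
            return rule[2]
    return 0                              # default class

def accuracy(ruleset, xs):
    return sum(classify(ruleset, x) == (1 if x >= 5 else 0) for x in xs) / len(xs)

def random_rule():
    lo = rng.uniform(0, 10)
    return (lo, rng.uniform(lo, 10), rng.randint(0, 1))

# Pittsburgh: population of WHOLE rule sets; fitness attaches to the set.
pitt_pop = [[random_rule() for _ in range(4)] for _ in range(20)]
best_set = max(pitt_pop, key=lambda rs: accuracy(rs, samples))

# Michigan: population of INDIVIDUAL rules; each rule earns its own fitness,
# and the population itself is the rule set used for classification.
mich_pop = [random_rule() for _ in range(20)]

def rule_fitness(rule):
    matched = [x for x in samples if rule_matches(rule, x)]
    if not matched:
        return 0.0
    return sum((1 if x >= 5 else 0) == rule[2] for x in matched) / len(matched)

mich_pop.sort(key=rule_fitness, reverse=True)
```

Crossover, mutation, and credit assignment are omitted; the sketch only shows where fitness attaches--to the set in Pittsburgh, to the rule in Michigan--which is precisely the axis the comparative investigation examines.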
Many tasks require an agent to monitor its environment, but little is known about which monitoring strategies are appropriate in particular situations. Our approach is to learn good monitoring strategies with a genetic programming algorithm. To this end, we have developed a simple agent programming language, in which monitoring strategies are represented as programs that control a simulated robot, together with a simulator in which the programs can be evaluated. The effect of different environments and tasks is determined experimentally: changing features of the environment changes which strategies are learned, and the correspondence between environment features and learned strategies can then be analyzed.
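The flavor of such experiments can be sketched as follows--a much-simplified stand-in of my own for the paper's agent language and simulator, using fixed-length linear programs over a two-instruction language rather than the tree-structured programs of full genetic programming. All constants (sensing cost, drift rate) are illustrative knobs of the kind whose variation would change which strategies get learned:

```python
import random

ACTIONS = ("MOVE", "SENSE")
SENSE_COST = 0.5     # price of each monitoring action
TARGET_DRIFT = 0.3   # how fast the environment changes per step

def evaluate(program, steps=60, seed=0):
    """Run the looped program in a toy simulator: MOVE steps toward the
    last sensed target position; SENSE refreshes that belief at a cost.
    Returns total score (higher is better)."""
    rng = random.Random(seed)
    robot, belief, target, score = 0.0, 0.0, 5.0, 0.0
    for t in range(steps):
        target += rng.uniform(-TARGET_DRIFT, TARGET_DRIFT)
        if program[t % len(program)] == "SENSE":
            belief = target
            score -= SENSE_COST
        else:  # MOVE toward where we BELIEVE the target is
            robot += 0.5 if belief > robot else -0.5
            score -= abs(target - robot) * 0.1   # tracking-error penalty
    return score

def evolve(pop_size=30, gens=50, length=6, seed=1):
    """Evolve monitoring strategies by truncation selection + point mutation."""
    rng = random.Random(seed)
    pop = [[rng.choice(ACTIONS) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=evaluate, reverse=True)
        pop = pop[:pop_size // 2]
        pop += [[op if rng.random() > 0.2 else rng.choice(ACTIONS)
                 for op in p] for p in pop]
    return max(pop, key=evaluate)
```

Raising `SENSE_COST` or lowering `TARGET_DRIFT` should bias evolution toward sparser monitoring, which is the kind of environment-to-strategy correspondence the experiments analyze.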