A group representing disabled people in Japan has said the doorway width stipulated in a proposed amendment to Tokyo's barrier-free ordinance for hotels is unlikely to be wide enough for many wheelchairs. The amendment, which the Tokyo Metropolitan Government aims to put into effect in September -- less than a year before the 2020 Olympics -- will require new hotels with more than 1,000 square meters of total floor space, as well as facilities expanding by 1,000 sq. meters or more, to comply with the stipulated doorway width. The metropolitan government set the requirement based on the Japanese Industrial Standards for wheelchairs. The envisioned ordinance also calls for new or renovated hotels of the required size to eliminate steps around roads, parking lots and hotel rooms. But the nonprofit Japan National Assembly of Disabled Peoples' International said its tests have found that most motorized wheelchairs cannot pass through a bathroom doorway of the stipulated width.
We describe how we manage cognitive information within our mobile robotics activities.

Introduction

In previous work (Konolige and Myers 1998) we discussed the requirements for autonomous mobile robot operation in open-ended environments. These environments were loosely characterized as dynamic and human-centric: objects could come and go, and the robots would have to interact with humans to carry out their tasks. For an individual robot, we summarized the most important capabilities as the three C's: coordination, coherence, and communication. These constitute a cognitive basis for a stand-alone, autonomous robot. Coordination: A mobile agent must coordinate its activity. At the lowest level there are commands for moving wheels, camera heads, and so on. At the highest level there are goals to achieve: getting to a destination, keeping track of location.
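The low-level/high-level split described above can be illustrated with a minimal two-layer sketch: a high-level planner emits waypoints toward a goal, and a low-level controller turns each waypoint into wheel commands. All function names and numbers here are illustrative assumptions, not the paper's architecture.

```python
import math

def low_level_step(pose, waypoint, speed=1.0):
    """Convert a waypoint into a (linear, angular) wheel command."""
    x, y, theta = pose
    dx, dy = waypoint[0] - x, waypoint[1] - y
    heading = math.atan2(dy, dx)
    angular = heading - theta  # steer toward the waypoint
    linear = speed if math.hypot(dx, dy) > 0.1 else 0.0
    return linear, angular

def high_level_plan(pose, goal, step=0.5):
    """Emit the next intermediate waypoint along the line to the goal."""
    x, y, _ = pose
    dx, dy = goal[0] - x, goal[1] - y
    dist = math.hypot(dx, dy)
    if dist <= step:
        return goal
    return (x + step * dx / dist, y + step * dy / dist)

pose = (0.0, 0.0, 0.0)
goal = (2.0, 0.0)
wp = high_level_plan(pose, goal)   # coordination at the goal level
cmd = low_level_step(pose, wp)     # coordination at the actuator level
```

Coherence would then sit between these layers, reconciling competing goals and commands into one consistent course of action.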
Autonomous cars have to navigate in dynamic environments that can be full of uncertainty. The uncertainties can come from sensor limitations such as occlusions and limited sensor range, from probabilistic predictions of other road participants, or from unknown social behavior in a new area. To drive safely and efficiently in the presence of these uncertainties, the decision-making and planning modules of autonomous cars should intelligently utilize all available information and appropriately tackle the uncertainties so that proper driving strategies can be generated. In this paper, we propose a social perception scheme which treats all road participants as distributed sensors in a sensor network. By observing individual behaviors as well as group behaviors, all three types of uncertainty can be updated uniformly in a belief space. The updated beliefs from the social perception are then explicitly incorporated into a probabilistic planning framework based on Model Predictive Control (MPC). The cost function of the MPC is learned via inverse reinforcement learning (IRL). Such an integrated probabilistic planning module with socially enhanced perception enables autonomous vehicles to generate behaviors which are defensive but not overly conservative, and socially compatible. The effectiveness of the proposed framework is verified in simulation on a representative scenario with sensor occlusions.
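The idea of folding updated beliefs into an MPC cost can be sketched with a toy one-dimensional example: an ego vehicle approaching an occluded crosswalk weighs a risk term by its belief p_occ that a pedestrian is hidden there. This is a minimal grid-search stand-in, assuming illustrative numbers; the paper's actual cost is learned via IRL.

```python
# Minimal sketch of belief-weighted MPC. p_occ is the belief that a
# pedestrian is hidden behind an occlusion near the crosswalk; it scales
# the risk cost. All constants here are illustrative assumptions.

def rollout_cost(accel, x0, v0, p_occ, crosswalk=20.0, horizon=10, dt=0.5):
    """Cost of holding one acceleration over the planning horizon."""
    x, v, cost = x0, v0, 0.0
    for _ in range(horizon):
        v = max(0.0, v + accel * dt)
        x += v * dt
        # progress term: penalize deviation from a target speed of 5 m/s
        cost += (v - 5.0) ** 2 * 0.1
        # risk term: belief-weighted penalty for speed near the crosswalk
        if abs(x - crosswalk) < 5.0:
            cost += p_occ * v ** 2
    return cost

def plan(x0, v0, p_occ, candidates=(-2.0, -1.0, 0.0, 1.0, 2.0)):
    """Pick the candidate acceleration with the lowest rollout cost."""
    return min(candidates, key=lambda a: rollout_cost(a, x0, v0, p_occ))

# High belief of a hidden pedestrian -> decelerate; low belief -> hold speed.
cautious = plan(x0=0.0, v0=5.0, p_occ=0.9)   # -1.0 (brake)
confident = plan(x0=0.0, v0=5.0, p_occ=0.0)  # 0.0 (keep going)
```

Shrinking the risk weight as social observations reduce p_occ is what lets the planner stay defensive without being overly conservative.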
Crowd behavior understanding is crucial yet challenging across a wide range of applications, since crowd behavior is inherently determined by a sequential decision-making process based on various factors, such as the pedestrians' own destinations, interactions with nearby pedestrians and anticipation of upcoming events. In this paper, we propose a novel framework of Social-Aware Generative Adversarial Imitation Learning (SA-GAIL) to mimic the underlying decision-making process of pedestrians in crowds. Specifically, we infer the latent factors of the human decision-making process in an unsupervised manner by extending the Generative Adversarial Imitation Learning framework to anticipate the future paths of pedestrians. Different factors of human decision making are disentangled via mutual information maximization, with the process modeled by collision avoidance regularization and Social-Aware LSTMs. Experimental results demonstrate the potential of our framework in disentangling the latent decision-making factors of pedestrians, as well as its stronger performance in predicting future trajectories.
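The role of the mutual-information term can be shown with a toy example: if the latent code c is disentangled, it should be highly informative about the generated behavior mode b, so the empirical MI between them is large; an entangled code yields MI near zero. The samples below are fabricated for illustration only; this is not the paper's training objective.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Empirical MI (in nats) between two discrete variables from samples."""
    n = len(pairs)
    joint = Counter(pairs)
    pc = Counter(c for c, _ in pairs)
    pb = Counter(b for _, b in pairs)
    mi = 0.0
    for (c, b), k in joint.items():
        p_cb = k / n
        mi += p_cb * math.log(p_cb / ((pc[c] / n) * (pb[b] / n)))
    return mi

# Disentangled generator: the code fully determines the behavior mode.
aligned = [(0, "avoid"), (0, "avoid"), (1, "follow"), (1, "follow")] * 25
# Entangled generator: the code carries no information about the behavior.
mixed = [(0, "avoid"), (0, "follow"), (1, "avoid"), (1, "follow")] * 25

mi_aligned = mutual_information(aligned)  # ~log(2) = 0.693 nats
mi_mixed = mutual_information(mixed)      # 0.0 nats
```

Maximizing a term of this form (in practice, a variational lower bound) pushes the generator to assign each latent factor a distinct, interpretable influence on the predicted path.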
In densely populated environments, socially compliant navigation is critical for autonomous robots, as driving close to people is unavoidable. This manner of social navigation is challenging given the constraints of human comfort and social rules. Traditional methods based on hand-crafted cost functions struggle to operate in the complex real world. Other learning-based approaches fail to address naturalness from the perspective of collective formation behaviors. We present an autonomous navigation system capable of operating in dense crowds and utilizing information about social groups. The underlying system incorporates a deep neural network to track social groups and join the flow of a social group to facilitate navigation. A collision avoidance layer in the system further ensures navigation safety. In experiments, our method generates socially compliant behaviors comparable to state-of-the-art methods. More importantly, the system is capable of navigating safely in a densely populated area (more than 10 people in a 10 m x 20 m area), following crowd flows to reach the goal.
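A common baseline for the group-tracking step is to cluster pedestrians that are spatially close and moving with similar velocities. The sketch below is a hypothetical stand-in using a simple union-find over pairwise coherent-motion checks; the paper itself uses a deep neural network, and all thresholds here are illustrative.

```python
import math

def same_group(p, q, max_dist=1.5, max_dv=0.5):
    """Pedestrians (x, y, vx, vy) are grouped if close and co-moving."""
    (px, py, pvx, pvy), (qx, qy, qvx, qvy) = p, q
    close = math.hypot(px - qx, py - qy) <= max_dist
    similar = math.hypot(pvx - qvx, pvy - qvy) <= max_dv
    return close and similar

def cluster(peds):
    """Union-find clustering of pedestrian indices into social groups."""
    parent = list(range(len(peds)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    for i in range(len(peds)):
        for j in range(i + 1, len(peds)):
            if same_group(peds[i], peds[j]):
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(peds)):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# (x, y, vx, vy): two walking together, one heading the opposite way.
peds = [(0.0, 0.0, 1.0, 0.0), (1.0, 0.2, 1.1, 0.0), (0.5, 0.1, -1.0, 0.0)]
groups = cluster(peds)  # [[0, 1], [2]]
```

Once a group moving toward the robot's goal is identified, the planner can join that group's flow rather than cut across it, which is what yields the socially compliant formation behavior.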