A group representing disabled people in Japan has said the doorway width stipulated in a proposed amendment to Tokyo's barrier-free ordinance for hotels is unlikely to be wide enough for many wheelchairs. The amendment, which the Tokyo Metropolitan Government aims to put into effect in September -- less than a year before the 2020 Olympics -- will require new hotels with more than 1,000 square meters of total floor space, as well as facilities expanding by 1,000 sq. meters or more, to meet accessibility standards that include the stipulated doorway width. The metropolitan government set the requirement based on the Japanese Industrial Standards for wheelchairs. The envisioned ordinance also calls for new or renovated hotels of the required size to eliminate steps around roads, parking lots and hotel rooms. But the nonprofit Japan National Assembly of Disabled Peoples' International said its tests have found that most motorized wheelchairs cannot pass through a bathroom doorway of the stipulated width.
Autonomous cars have to navigate in dynamic environments that can be full of uncertainties. The uncertainties can come from sensor limitations such as occlusions and limited sensor range, from the probabilistic prediction of other road participants, or from unknown social behavior in a new area. To drive safely and efficiently in the presence of these uncertainties, the decision-making and planning modules of autonomous cars should intelligently use all available information and appropriately handle the uncertainties so that proper driving strategies can be generated. In this paper, we propose a social perception scheme which treats all road participants as distributed sensors in a sensor network. By observing individual behaviors as well as group behaviors, all three types of uncertainty can be updated uniformly in a belief space. The updated beliefs from the social perception are then explicitly incorporated into a probabilistic planning framework based on Model Predictive Control (MPC). The cost function of the MPC is learned via inverse reinforcement learning (IRL). Such an integrated probabilistic planning module with socially enhanced perception enables autonomous vehicles to generate behaviors that are defensive but not overly conservative, and socially compatible. The effectiveness of the proposed framework is verified in simulation on a representative scenario with sensor occlusions.
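The core of the social perception idea can be illustrated with a toy example. The following sketch (assumed and illustrative, not the paper's implementation; the likelihood values are hypothetical) treats an observed driver's action as a noisy sensor reading about an occluded region, and updates the occupancy belief with Bayes' rule:

```python
# Toy sketch of "road participants as distributed sensors": an observed
# driver slowing down is evidence that an occluded crosswalk is occupied.
# The likelihoods below are assumed for illustration.
P_SLOW_GIVEN_OCCUPIED = 0.8   # drivers usually brake for a hidden pedestrian
P_SLOW_GIVEN_FREE = 0.1       # they rarely brake otherwise

def update_belief(prior_occupied, observed_slowdown):
    """One Bayesian update of P(occluded region occupied) from one driver."""
    if observed_slowdown:
        l_occ, l_free = P_SLOW_GIVEN_OCCUPIED, P_SLOW_GIVEN_FREE
    else:
        l_occ, l_free = 1 - P_SLOW_GIVEN_OCCUPIED, 1 - P_SLOW_GIVEN_FREE
    evidence = l_occ * prior_occupied + l_free * (1 - prior_occupied)
    return l_occ * prior_occupied / evidence

belief = 0.2                      # prior: occluded crosswalk occupied
for slowdown in [True, True]:     # two nearby drivers both observed braking
    belief = update_belief(belief, slowdown)
print(round(belief, 3))           # prints 0.941
```

In a full planner, such a belief would feed the chance constraints or the cost of the MPC, so that the vehicle slows near the occlusion only when the evidence from other agents warrants it.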
We describe how we manage cognitive information within our mobile robotics activities. In previous work (Konolige and Myers 1998) we discussed the requirements for autonomous mobile robot operation in open-ended environments. These environments were loosely characterized as dynamic and human-centric; that is, objects could come and go, and the robots would have to interact with humans to carry out their tasks. For an individual robot, we summarized the most important capabilities as the three C's: coordination, coherence, and communication. These constitute a cognitive basis for a stand-alone, autonomous robot. Coordination: a mobile agent must coordinate its activity. At the lowest level there are commands for moving wheels, camera heads, and so on. At the highest level there are goals to achieve: getting to a destination, keeping track of location.
Crowd behavior understanding is crucial yet challenging across a wide range of applications, since crowd behavior is inherently determined by a sequential decision-making process based on various factors, such as the pedestrians' own destinations, interactions with nearby pedestrians and anticipation of upcoming events. In this paper, we propose a novel framework of Social-Aware Generative Adversarial Imitation Learning (SA-GAIL) to mimic the underlying decision-making process of pedestrians in crowds. Specifically, we infer the latent factors of the human decision-making process in an unsupervised manner by extending the Generative Adversarial Imitation Learning framework to anticipate future paths of pedestrians. Different factors of human decision making are disentangled through mutual information maximization, with the process modeled by collision avoidance regularization and Social-Aware LSTMs. Experimental results demonstrate the potential of our framework in disentangling the latent decision-making factors of pedestrians and a stronger ability to predict future trajectories.
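The mutual-information term behind this kind of disentanglement can be shown on a toy discrete case. The sketch below (assumed, InfoGAIL-style; not the paper's code) evaluates the standard lower bound L_I = E[log Q(c | tau)] + H(c), where Q is an auxiliary posterior over the latent code c given a generated trajectory tau; an informative posterior scores strictly higher than an uninformative one:

```python
import numpy as np

rng = np.random.default_rng(0)

def mi_lower_bound(codes, q_posterior):
    """L_I = E[log Q(c|tau)] + H(c) for discrete codes with a uniform prior."""
    n_codes = q_posterior.shape[1]
    entropy = np.log(n_codes)  # H(c) under a uniform prior over codes
    log_q = np.log(q_posterior[np.arange(len(codes)), codes])
    return log_q.mean() + entropy

# 100 trajectories, each generated from one of two latent codes
codes = rng.integers(0, 2, size=100)
q_informative = np.eye(2)[codes] * 0.98 + 0.01   # posterior near one-hot on the true code
q_uninformative = np.full((100, 2), 0.5)         # posterior ignores the trajectory

print(mi_lower_bound(codes, q_informative) > mi_lower_bound(codes, q_uninformative))
```

Maximizing this bound during training pushes the generator to make each latent code recoverable from the trajectories it produces, which is what yields disentangled decision-making factors.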
Understanding the behaviors and intentions of humans is one of the main challenges autonomous ground vehicles still face. More specifically, when it comes to complex environments such as urban traffic scenes, inferring the intentions and actions of vulnerable road users such as pedestrians becomes even harder. In this paper, we address the problem of intended-action prediction for pedestrians in urban traffic environments using only image sequences from a monocular RGB camera. We propose a real-time framework that can accurately detect, track and predict the intended actions of pedestrians based on a tracking-by-detection technique in conjunction with a novel spatio-temporal DenseNet model. We trained and evaluated our framework on real data collected from urban traffic environments. Our framework has shown robust and competitive results in comparison to other baseline approaches. Overall, we achieved an average precision score of 84.76% with real-time performance at 20 FPS.
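The association step at the heart of tracking-by-detection can be sketched as follows. This is an assumed, minimal illustration (greedy IoU matching; the paper's tracker and thresholds may differ): per-frame pedestrian detections extend existing tracks by best bounding-box overlap, and the resulting crop sequences would then be fed to the spatio-temporal classifier:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def associate(tracks, detections, thresh=0.3):
    """Greedily extend each track with its best-overlapping unused detection."""
    assigned, matches = set(), {}
    for tid, box in tracks.items():
        best = max(
            (d for d in range(len(detections)) if d not in assigned),
            key=lambda d: iou(box, detections[d]),
            default=None,
        )
        if best is not None and iou(box, detections[best]) >= thresh:
            matches[tid] = best
            assigned.add(best)
    return matches

tracks = {0: (10, 10, 50, 90)}                      # previous-frame pedestrian box
detections = [(12, 11, 52, 92), (200, 10, 240, 90)]  # current-frame detections
print(associate(tracks, detections))                 # prints {0: 0}
```

Production trackers typically replace the greedy loop with Hungarian matching and add appearance or motion cues, but the data flow — detect per frame, associate into tracks, classify per track — is the same.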