Active Online Learning Architecture for Multimodal Sensor-based ADL Recognition
Oishi, Nobuyuki (The University of Electro-Communications) | Numao, Masayuki (The University of Electro-Communications)
Long-term observation of changes in Activities of Daily Living (ADL) is important for helping older people stay active longer by preventing aging-associated diseases such as disuse syndrome. Previous studies have proposed a number of ways to detect a person's state using a single type of sensor data. However, recognizing more complicated states requires properly integrating multiple types of sensor data, which remains a challenge. In addition, previous methods lack the ability to deal with misclassified data whose label types are unknown at training time. In this paper, we propose an architecture for multimodal sensor-based ADL recognition that spontaneously acquires knowledge from data of unknown label types. Evaluation experiments test the architecture's ability to recognize ADL and to construct data-driven reactive planning by integrating three types of dataflows, as well as to acquire new concepts and expand existing concepts semi-autonomously and in real time. By adding extension plugins to Fluentd, we extended its functions and developed an extended system, Fluentd++. The results of the evaluation experiments indicate that the architecture achieves the required functions satisfactorily.
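The abstract does not detail the learning mechanism, so the following is only a minimal, hypothetical Python sketch of the general idea it describes (not the authors' implementation or the Fluentd++ plugins): an incremental nearest-centroid classifier over fused sensor features that flags low-confidence samples as unknown, actively queries for a label, and then either expands an existing concept or acquires a new one online. All class and function names, the distance threshold, and the simulated feature streams are assumptions for illustration.

```python
import numpy as np


class ActiveOnlineLearner:
    """Toy active online learner: one running centroid per known ADL concept."""

    def __init__(self, unknown_threshold=2.5):
        self.centroids = {}   # label -> running mean feature vector
        self.counts = {}      # label -> number of samples seen so far
        self.unknown_threshold = unknown_threshold  # assumed distance cutoff

    def predict(self, x):
        """Return (nearest label, distance); label is None if nothing is known yet."""
        if not self.centroids:
            return None, np.inf
        return min(
            ((lbl, np.linalg.norm(x - c)) for lbl, c in self.centroids.items()),
            key=lambda pair: pair[1],
        )

    def update(self, x, label):
        """Acquire a new concept or expand an existing one incrementally."""
        if label not in self.centroids:
            self.centroids[label] = x.astype(float)
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            # Incremental mean update: no past samples need to be stored.
            self.centroids[label] += (x - self.centroids[label]) / self.counts[label]

    def process(self, x, query_label):
        """Classify a fused sensor sample; query for a label only when uncertain."""
        label, dist = self.predict(x)
        if label is None or dist > self.unknown_threshold:
            label = query_label()   # active query for an unknown/uncertain sample
            self.update(x, label)
        return label


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    learner = ActiveOnlineLearner(unknown_threshold=2.5)
    # Simulated fused sensor features for two hypothetical ADL concepts.
    stream = [("walking", rng.normal(0.0, 0.3, 4)),
              ("cooking", rng.normal(5.0, 0.3, 4)),
              ("walking", rng.normal(0.0, 0.3, 4))]
    for true_label, features in stream:
        predicted = learner.process(features, query_label=lambda: true_label)
        print(true_label, "->", predicted)
```

In this sketch the query callback stands in for whatever labeling source the real system uses; the point is only that uncertainty triggers label acquisition at run time, so new classes can appear after deployment without retraining from scratch.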
Mar-21-2018