Cross-modal Recurrent Models for Weight Objective Prediction from Multimodal Time-series Data

Petar Veličković, Laurynas Karazija, Nicholas D. Lane, Sourav Bhattacharya, Edgar Liberis, Pietro Liò, Angela Chieh, Otmane Bellahsen, Matthieu Vegreville

arXiv.org Machine Learning 

We analyse multimodal time-series data corresponding to weight, sleep and steps measurements. We focus on predicting whether a user will successfully achieve his/her weight objective. For this, we design several deep long short-term memory (LSTM) architectures, including a novel cross-modal LSTM (X-LSTM), and demonstrate their superiority over baseline approaches. The X-LSTM improves parameter efficiency by processing each modality separately and allowing for information flow between them by way of recurrent cross-connections. We present a general hyperparameter optimisation technique for X-LSTMs, which allows us to significantly improve on the LSTM and a prior state-of-the-art cross-modal approach, using a comparable number of parameters. Finally, we visualise the model's predictions, revealing implications about latent variables in this task.
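Below is a minimal sketch of the kind of cross-modal recurrent block the abstract describes: one LSTM per modality, with the per-modality hidden sequences shared across modalities before a second per-modality layer. The exact wiring of the paper's recurrent cross-connections differs in detail; class and variable names here (CrossModalLSTM, hidden_size, the concatenation scheme) are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a cross-modal LSTM (X-LSTM) block, assuming cross-connections
# are realised by feeding every modality's first-layer output into every
# modality's second-layer LSTM. Not the authors' code.
import torch
import torch.nn as nn


class CrossModalLSTM(nn.Module):
    """A separate LSTM per modality, whose outputs are shared across
    modalities before a second per-modality LSTM layer."""

    def __init__(self, input_sizes, hidden_size):
        super().__init__()
        self.first = nn.ModuleList(
            [nn.LSTM(d, hidden_size, batch_first=True) for d in input_sizes]
        )
        # Each second-layer LSTM sees the concatenation of all first-layer
        # outputs -- the cross-modal information flow.
        total = hidden_size * len(input_sizes)
        self.second = nn.ModuleList(
            [nn.LSTM(total, hidden_size, batch_first=True) for _ in input_sizes]
        )

    def forward(self, xs):
        # xs: list of tensors, one per modality, each (batch, time, features)
        firsts = [lstm(x)[0] for lstm, x in zip(self.first, xs)]
        crossed = torch.cat(firsts, dim=-1)  # share state across modalities
        return [lstm(crossed)[0] for lstm in self.second]


# Usage: hypothetical weight, sleep and steps streams with 1, 4 and 2
# features per time step.
model = CrossModalLSTM(input_sizes=[1, 4, 2], hidden_size=16)
batch = [torch.randn(8, 30, d) for d in (1, 4, 2)]
outputs = model(batch)  # three (8, 30, 16) tensors, one per modality
```

Processing each modality with its own, smaller LSTM is what gives the parameter efficiency claimed in the abstract, since the per-modality hidden sizes can be tuned independently rather than forcing one wide LSTM over the concatenated input.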
