Comparatives, Quantifiers, Proportions: A Multi-Task Model for the Learning of Quantities from Vision
Pezzelle, Sandro, Sorodoc, Ionut-Teodor, Bernardi, Raffaella
The present work investigates whether different quantification mechanisms (set comparison, vague quantification, and proportional estimation) can be jointly learned from visual scenes by a multi-task computational model. The motivation is that, in humans, these processes are underpinned by the same cognitive, non-symbolic ability, which enables the automatic estimation and comparison of set magnitudes. We show that when information about the lower-complexity tasks is available, the higher-level proportional task becomes more accurate than when performed in isolation. Moreover, the multi-task model is able to generalize to unseen combinations of target/non-target objects. Consistent with behavioral evidence showing the interference of absolute number in the proportional task, the multi-task model breaks down when asked to provide the number of target objects in the scene.
Apr-13-2018
- Genre:
- Research Report > New Finding (0.68)
- Technology:
- Information Technology > Artificial Intelligence
- Cognitive Science (1.00)
- Machine Learning
- Neural Networks (1.00)
- Statistical Learning (0.68)
- Natural Language (1.00)
- Representation & Reasoning (0.68)
- Vision (0.70)
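The multi-task setup described in the abstract can be illustrated with a minimal sketch: a shared encoder over visual features feeds three task-specific classification heads (set comparison, vague quantification, proportional estimation), trained jointly by summing the per-task losses. All module names, layer sizes, class counts, and the equal loss weighting below are hypothetical assumptions for illustration, not the paper's actual architecture or hyperparameters.

```python
# Hypothetical multi-task sketch: shared encoder + three quantification heads.
import torch
import torch.nn as nn

class MultiTaskQuantifier(nn.Module):
    def __init__(self, feat_dim=4096, hidden=512,
                 n_comp=3, n_quant=4, n_prop=17):  # class counts are assumptions
        super().__init__()
        # Shared layers: map precomputed image features to a common representation.
        self.shared = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One classification head per quantification task.
        self.comp_head = nn.Linear(hidden, n_comp)    # e.g., fewer / same / more
        self.quant_head = nn.Linear(hidden, n_quant)  # e.g., no / few / most / all
        self.prop_head = nn.Linear(hidden, n_prop)    # proportion bins

    def forward(self, feats):
        h = self.shared(feats)
        return self.comp_head(h), self.quant_head(h), self.prop_head(h)

# Joint training step: sum the three cross-entropy losses (equal weighting
# is an assumption; a real setup might weight tasks differently).
model = MultiTaskQuantifier()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

feats = torch.randn(8, 4096)           # batch of precomputed image features
y_comp = torch.randint(0, 3, (8,))     # set-comparison labels
y_quant = torch.randint(0, 4, (8,))    # vague-quantifier labels
y_prop = torch.randint(0, 17, (8,))    # proportion-bin labels

logits_c, logits_q, logits_p = model(feats)
loss = (criterion(logits_c, y_comp)
        + criterion(logits_q, y_quant)
        + criterion(logits_p, y_prop))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The key design point the abstract reports is that the shared representation lets supervision from the lower-complexity tasks (comparison, vague quantification) improve the higher-level proportional head relative to training it in isolation.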