Compositional Concept Generalization with Variational Quantum Circuits
Hala Hawashin, Mina Abbaszadeh, Nicholas Joseph, Beth Pearson, Martha Lewis, Mehrnoosh Sadrzadeh
–arXiv.org Artificial Intelligence
Abstract: Compositional generalization is a key facet of human cognition, but it is lacking in current AI tools such as vision-language models. Previous work examined whether a compositional tensor-based sentence semantics could overcome the challenge, but obtained negative results. We conjecture that the increased training efficiency of quantum models will improve performance on these tasks. We interpret the representations of compositional tensor-based models in Hilbert spaces and train Variational Quantum Circuits to learn these representations on an image-captioning task requiring compositional generalization. We use two image-encoding techniques: a multi-hot encoding (MHE) on binary image vectors and an angle/amplitude encoding on image vectors taken from the vision-language model CLIP. We achieve good proof-of-concept results using noisy MHE encodings. Performance on CLIP image vectors was more mixed, but still outperformed classical compositional models.
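The angle encoding mentioned in the abstract maps each real-valued feature of an image vector to a single-qubit rotation, so an n-dimensional vector becomes an n-qubit product state that a variational circuit can then process. A minimal, library-free sketch of this step is below; the function name `angle_encode` and the example inputs are illustrative assumptions, not the paper's actual implementation.

```python
import math
from itertools import product

def angle_encode(features):
    """Angle-encode a real feature vector into an n-qubit product state.

    Each feature x_i is mapped to RY(x_i)|0> = cos(x_i/2)|0> + sin(x_i/2)|1>,
    and the full register state is the tensor product of these single-qubit
    states. A sketch for illustration only, not the authors' code.
    """
    # Per-qubit amplitude pairs (amplitude of |0>, amplitude of |1>).
    qubits = [(math.cos(x / 2), math.sin(x / 2)) for x in features]
    # Tensor product: the amplitude of basis state |b_1 ... b_n> is the
    # product of the chosen component from each qubit.
    state = []
    for bits in product((0, 1), repeat=len(qubits)):
        amp = 1.0
        for (a0, a1), b in zip(qubits, bits):
            amp *= a1 if b else a0
        state.append(amp)
    return state

state = angle_encode([0.3, 1.2])        # 2 features -> 4 amplitudes
print(sum(a * a for a in state))        # normalization check, ~1.0
```

Because each RY rotation preserves norm, the encoded state is automatically normalized, which is one reason angle encoding is a convenient front end for variational quantum circuits.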
Sep-12-2025