Waffling around for Performance: Visual Classification with Random Words and Broad Concepts
Roth, Karsten, Kim, Jae Myung, Koepke, A. Sophia, Vinyals, Oriol, Schmid, Cordelia, Akata, Zeynep
arXiv.org Artificial Intelligence
The visual classification performance of vision-language models such as CLIP has been shown to benefit from additional semantic knowledge from large language models (LLMs) such as GPT-3. In particular, averaging over LLM-generated class descriptors, e.g. "waffle, which has a round shape", can notably improve generalization performance. In this work, we critically study this behavior and propose WaffleCLIP, a framework for zero-shot visual classification which simply replaces LLM-generated descriptors with random character and word descriptors. Without querying external models, we achieve comparable performance gains on a large number of visual classification tasks. This allows WaffleCLIP to serve both as a low-cost alternative and as a sanity check for any future LLM-based vision-language model extensions. We conduct an extensive experimental study on the impact and shortcomings of the additional semantics introduced by LLM-generated descriptors, and showcase how semantic context, if available, is better leveraged by querying LLMs for high-level concepts, which we show can also jointly resolve potential class name ambiguities. Code is available here: https://github.com/ExplainableML/WaffleCLIP.
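The core idea described in the abstract, replacing LLM-generated descriptors with random character and word descriptors inserted into a fixed prompt template, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function names, template wording, and descriptor length are assumptions based only on the example quoted above ("waffle, which has a round shape"); the official code is in the linked repository.

```python
import random
import string

def make_random_descriptors(num_descriptors=15, word_len=5, seed=None):
    # Hypothetical sketch: generate random lowercase character strings to
    # stand in for LLM-generated class descriptors, as WaffleCLIP proposes.
    rng = random.Random(seed)
    return [
        "".join(rng.choices(string.ascii_lowercase, k=word_len))
        for _ in range(num_descriptors)
    ]

def build_prompts(class_name, descriptors):
    # Assumed template, patterned on the descriptor style quoted in the
    # abstract: "waffle, which has a round shape".
    return [f"A photo of a {class_name}, which has {d}." for d in descriptors]

# In a full pipeline one would encode each prompt with CLIP's text encoder
# and average the resulting embeddings per class before classification.
prompts = build_prompts("waffle", make_random_descriptors(num_descriptors=3))
```

The averaging step mirrors the descriptor-ensembling described in the abstract; only the descriptors themselves are replaced with random strings, so no external LLM query is needed.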
Aug-16-2023