Do Vision-Language Models Understand Compound Nouns?
Kumar, Sonal; Ghosh, Sreyan; Sakshi, S; Tyagi, Utkarsh; Manocha, Dinesh
Open-vocabulary vision-language models (VLMs) like CLIP, trained using contrastive loss, have emerged as a promising new paradigm for text-to-image retrieval. However, do VLMs understand compound nouns (CNs) (e.g., lab coat) as well as they understand nouns (e.g., lab)? We curate Compun, a novel benchmark with 400 unique and commonly used CNs, to evaluate the effectiveness of VLMs in interpreting CNs. The Compun benchmark challenges a VLM on text-to-image retrieval: given a text prompt with a CN, the task is to select the correct image showing the CN from among a pair of distractor images that show the constituent nouns that make up the CN. Next, we perform an in-depth analysis to highlight CLIP's limited understanding of certain types of CNs. Finally, we present an alternative framework that moves beyond the hand-written prompt templates widely used by CLIP-like models. We employ a Large Language Model to generate multiple diverse captions that include the CN as an object in the scene described by the caption. Our proposed method improves the CN understanding of CLIP by 8.25% on Compun. Code and benchmark are available at: https://github.com/sonalkum/Compun
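The retrieval scheme the abstract describes, scoring each candidate image against several LLM-generated captions instead of a single template, can be sketched as follows. This is a minimal illustration using small stand-in vectors in place of real CLIP image/text embeddings; all values and names here are hypothetical, not the paper's implementation.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(image_embs, caption_embs):
    """Score each image against every caption and average the
    similarities; return the index of the best-matching image."""
    scores = [np.mean([cosine(img, cap) for cap in caption_embs])
              for img in image_embs]
    return int(np.argmax(scores))

# Toy 4-d embeddings standing in for CLIP outputs (hypothetical values).
lab_coat_img = np.array([0.9, 0.1, 0.0, 0.1])  # target: shows the compound noun
lab_img      = np.array([0.1, 0.9, 0.0, 0.0])  # distractor: "lab" only
coat_img     = np.array([0.0, 0.1, 0.9, 0.0])  # distractor: "coat" only

# Embeddings of several diverse captions mentioning "lab coat".
captions = [np.array([0.8, 0.2, 0.1, 0.1]),
            np.array([0.9, 0.0, 0.1, 0.2])]

best = retrieve([lab_coat_img, lab_img, coat_img], captions)
print(best)  # 0 -> the "lab coat" image
```

Averaging over several captions is what distinguishes this from single-template prompting: a caption ensemble describes the CN in varied scene contexts, so no single phrasing dominates the score.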
LLMs Perform Poorly at Concept Extraction in Cyber-security Research Literature
Würsch, Maxime; Kucharavy, Andrei; Percia David, Dimitri; Mermoud, Alain
Secure and reliable information systems have become a central requirement for the operational continuity of the vast majority of goods and services providers [42]. However, securing information systems in a fast-paced ecosystem of technological changes and innovations is hard [3]. New technologies in cybersecurity have short life cycles and constantly evolve [13]. This exposes information systems to attacks that exploit vulnerabilities and security gaps [3]. Hence, cybersecurity practitioners and researchers need to stay updated on the latest developments and trends to prevent incidents and increase resilience [14]. A common approach to gathering curated and synthesized information about such developments is to apply bibliometrics-based knowledge entity extraction and comparison through embedding similarity [10, 50, 61], recently boosted by the availability of entity extractors based on large language models (LLMs) [17, 46]. However, it is unclear how appropriate this approach is for the cybersecurity literature. We address this by emulating such an entity extraction and comparison pipeline, using a variety of common entity extractors, both LLM-based and not, and evaluating how relevant the embeddings of extracted entities are to document understanding tasks, namely classification of arXiv documents (https://arxiv.org) as relevant to cybersecurity. While LLMs burst into public attention in late 2022, in large part thanks to public trials of conversationally fine-tuned LLMs [40, 4, 31], modern large language models pre-trained on large amounts of data trace their roots back to ELMo, first released in 2018 [45].
Hierarchical Soft Clustering and Automatic Text Summarization for Accessing the Web on Mobile Devices for Visually Impaired People
Dias, Gaël Harry (University of Beira Interior) | Pais, Sebastião (University of Beira Interior) | Cunha, Fernando (University of Beira Interior) | Costa, Hugo (University of Beira Interior) | Machado, David (University of Beira Interior) | Barbosa, Tiago (University of Beira Interior) | Martins, Bruno (University of Beira Interior)
In this paper, we propose a universal solution to web search and web browsing on handheld devices for visually impaired people. For this purpose, we propose (1) to automatically cluster web page results and (2) to summarize all the information in web pages so that speech-to-speech interaction is used efficiently to access information.
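The "soft clustering" of web page results mentioned above can be illustrated with a minimal sketch: assign each result a graded membership in every cluster rather than a single hard label. This toy version uses a softmax over negative squared distances to fixed cluster centers; the points, centers, and temperature are invented for illustration and are not the authors' hierarchical algorithm.

```python
import numpy as np

def soft_memberships(points, centers, temperature=1.0):
    """Soft-assign each point to every cluster: softmax over negative
    squared distances, so each point's memberships sum to 1."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    logits = -d2 / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=1, keepdims=True)

# Two toy result vectors and two cluster centers (hypothetical values).
points  = np.array([[0.0, 0.0], [1.0, 1.0]])
centers = np.array([[0.0, 0.1], [1.0, 0.9]])
m = soft_memberships(points, centers)
print(m.round(2))  # each row sums to 1; point 0 leans to cluster 0, point 1 to cluster 1
```

Soft membership matters for the accessibility use case: a result that plausibly belongs to two topics can be surfaced under both when the interface is navigated by audio, instead of being hidden in whichever single cluster a hard assignment picked.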