cafe
'Coffee is just the excuse': the deaf-run cafe where hearing people sign to order
The video menu at Dialogue Cafe teaches hearing people how to order a drink using sign language.
Wesley Hartwell raised his fists to the barista and shook them next to his ears. He then lowered his fists, extended his thumbs and little fingers, and moved them up and down by his chest, as though milking a cow. Finally, he laid the fingers of one hand flat on his chin and flexed his wrist forward.
- North America > United States (0.14)
- Europe > United Kingdom > Wales (0.05)
- Europe > United Kingdom > Scotland (0.05)
- (5 more...)
- Education (0.96)
- Leisure & Entertainment > Sports (0.70)
- Government > Regional Government (0.48)
- Health & Medicine > Therapeutic Area > Otolaryngology (0.35)
CAFE: Catastrophic Data Leakage in Vertical Federated Learning
Recent studies show that private training data can be leaked through the gradient-sharing mechanism deployed in distributed machine learning systems, such as federated learning (FL). Increasing the batch size to complicate data recovery is often viewed as a promising defense against data leakage. In this paper, we revisit this defense premise and propose an advanced data leakage attack, with theoretical justification, that efficiently recovers batch data from the shared aggregated gradients. We name our method catastrophic data leakage in vertical federated learning (CAFE). Compared with existing data leakage attacks, our extensive experiments in vertical FL settings demonstrate that CAFE performs large-batch data leakage with improved data recovery quality. We also propose a practical countermeasure to mitigate CAFE. Our results suggest that private data used in standard FL, especially in the vertical case, are at high risk of being leaked from the training gradients. Our analysis implies unprecedented and practical data leakage risks in these learning settings. The code for our work is available at https://github.com/DeRafael/CAFE.
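The abstract does not spell out the attack itself, but the family it belongs to, recovering inputs by optimizing dummy data whose gradients match the shared gradients, can be sketched minimally as below. The model, batch shape, and the assumption that labels are known to the attacker are illustrative choices, not the paper's implementation.

```python
# Minimal gradient-matching recovery sketch (DLG-style): reconstruct inputs by
# optimizing dummy data so its gradients match the shared ones. The model,
# shapes, and the known-label assumption are illustrative choices only.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
params = list(model.parameters())
loss_fn = nn.CrossEntropyLoss()

# Private batch held by a client (unknown to the attacker).
x_true = torch.randn(8, 20)
y_true = torch.randint(0, 2, (8,))

# Gradients that would be shared during training.
shared_grads = [g.detach() for g in
                torch.autograd.grad(loss_fn(model(x_true), y_true), params)]

# The attacker optimizes dummy inputs so their gradients match the shared ones.
x_dummy = torch.randn(8, 20, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.05)

for step in range(500):
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(loss_fn(model(x_dummy), y_true),
                                      params, create_graph=True)
    match = sum(((dg - sg) ** 2).sum() for dg, sg in zip(dummy_grads, shared_grads))
    match.backward()
    opt.step()

print("final gradient-matching loss:", float(match))
```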
- Europe > Austria > Vienna (0.14)
- North America > United States > Colorado > Broomfield County > Broomfield (0.04)
- North America > United States > Colorado > Denver County > Denver (0.04)
- (5 more...)
- Workflow (0.68)
- Research Report > New Finding (0.68)
CAFE: Retrieval Head-based Coarse-to-Fine Information Seeking to Enhance Multi-Document QA Capability
Peng, Han, Jiang, Jinhao, Dong, Zican, Zhao, Wayne Xin, Fang, Lei
Advancements in Large Language Models (LLMs) have extended their input context length, yet they still struggle with retrieval and reasoning over long-context inputs. Existing methods use prompting strategies and retrieval heads to alleviate this limitation, but they still struggle to balance retrieval precision and recall, which impacts their efficacy in answering questions. To address this, we introduce CAFE, a two-stage coarse-to-fine method that enhances multi-document question-answering capabilities. By gradually eliminating the negative impact of background and distracting documents, CAFE makes the responses more reliant on the evidence documents. First, a coarse-grained filtering method leverages retrieval heads to identify and rank relevant documents. Then, a fine-grained steering method guides attention to the most relevant content. Experiments across benchmarks show that CAFE outperforms baselines, achieving up to 22.1% and 13.7% SubEM improvement over SFT and RAG methods on the Mistral model, respectively.
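The two-stage structure, first ranking documents by a relevance signal and then focusing the model on the survivors, can be illustrated roughly as below. The lexical scoring function is a hypothetical stand-in for retrieval-head attention scores, and the prompt construction is only a placeholder for the paper's attention-steering step.

```python
# Coarse-to-fine document filtering sketch. relevance_score is a simple lexical
# stand-in for retrieval-head scores (hypothetical), used only to illustrate
# the two-stage filter-then-focus structure.
from typing import List, Tuple

def relevance_score(question: str, doc: str) -> float:
    """Stand-in score: fraction of question tokens that appear in the document."""
    q_tokens = set(question.lower().split())
    d_tokens = set(doc.lower().split())
    return len(q_tokens & d_tokens) / max(len(q_tokens), 1)

def coarse_filter(question: str, docs: List[str], keep: int) -> List[Tuple[str, float]]:
    """Stage 1: rank all documents and keep only the top candidates."""
    scored = sorted(((d, relevance_score(question, d)) for d in docs),
                    key=lambda pair: pair[1], reverse=True)
    return scored[:keep]

def fine_prompt(question: str, ranked: List[Tuple[str, float]]) -> str:
    """Stage 2: build a prompt that foregrounds the highest-scoring evidence."""
    evidence = "\n".join(f"[score={s:.2f}] {d}" for d, s in ranked)
    return f"Answer using only the evidence below.\n{evidence}\nQuestion: {question}"

docs = [
    "The cafe opened in Vienna in 2019 and serves espresso.",
    "Steiner trees connect a set of terminal nodes at minimum cost.",
    "Retrieval heads attend to passages that contain the answer.",
]
question = "Which passages do retrieval heads attend to?"
print(fine_prompt(question, coarse_filter(question, docs, keep=2)))
```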
- Europe > Austria > Vienna (0.14)
- North America > United States > Florida > Miami-Dade County > Miami (0.14)
- Asia > Thailand > Bangkok > Bangkok (0.04)
- (4 more...)
CAFe: Unifying Representation and Generation with Contrastive-Autoregressive Finetuning
Yu, Hao, Zhao, Zhuokai, Yan, Shen, Korycki, Lukasz, Wang, Jianyu, He, Baosheng, Liu, Jiayi, Zhang, Lizhu, Fan, Xiangjun, Yu, Hanchao
The rapid advancement of large vision-language models (LVLMs) has driven significant progress in multimodal tasks, enabling models to interpret, reason, and generate outputs across both visual and textual domains. While excelling at generative tasks, existing LVLMs often face limitations in tasks requiring high-fidelity representation learning, such as generating image or text embeddings for retrieval. Recent work has proposed fine-tuning LVLMs for representation learning, but the fine-tuned model often loses its generative capabilities under this training paradigm. To address this trade-off, we introduce CAFe, a contrastive-autoregressive fine-tuning framework that enhances LVLMs for both representation and generative tasks. By integrating a contrastive objective with autoregressive language modeling, our approach unifies these traditionally separate tasks, achieving state-of-the-art results on both multimodal retrieval and multimodal generative benchmarks, including object hallucination (OH) mitigation. CAFe establishes a novel framework that synergizes embedding and generative functionalities in a single model, setting a foundation for future multimodal models that excel in both retrieval precision and coherent output generation.
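A joint objective of this kind, contrastive alignment of paired embeddings plus next-token prediction, can be sketched as follows. The loss weighting, dimensions, and toy tensors standing in for encoder outputs are illustrative assumptions, not CAFe's actual architecture or hyperparameters.

```python
# Sketch of a combined contrastive + autoregressive objective.
# Dimensions, the temperature, and the loss weight are illustrative assumptions.
import torch
import torch.nn.functional as F

def info_nce(img_emb: torch.Tensor, txt_emb: torch.Tensor, temperature: float = 0.07):
    """Symmetric InfoNCE over a batch of paired image/text embeddings."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature            # (B, B) similarity matrix
    targets = torch.arange(logits.size(0))
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

def joint_loss(img_emb, txt_emb, lm_logits, lm_targets, alpha: float = 0.5):
    """Weighted sum of the contrastive and autoregressive language-modeling losses."""
    contrastive = info_nce(img_emb, txt_emb)
    autoregressive = F.cross_entropy(lm_logits.reshape(-1, lm_logits.size(-1)),
                                     lm_targets.reshape(-1))
    return alpha * contrastive + (1 - alpha) * autoregressive

# Toy tensors standing in for model outputs: batch 4, sequence 6, vocab 100, dim 32.
B, T, V, D = 4, 6, 100, 32
loss = joint_loss(torch.randn(B, D), torch.randn(B, D),
                  torch.randn(B, T, V), torch.randint(0, V, (B, T)))
print(float(loss))
```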
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.97)
- Information Technology > Sensing and Signal Processing > Image Processing (0.93)
CAFEs: Cable-driven Collaborative Floating End-Effectors for Agriculture Applications
Cheng, Hung Hon, Hughes, Josie
CAFEs (Collaborative Agricultural Floating End-effectors) is a new robot design and control approach for automating large-scale agricultural tasks. Built on a cable-driven robot architecture in which modular robotic arms share the same roller-driven cable set, a fast-switching clamping mechanism allows each CAFE to clamp onto or release from the moving cables, enabling both independent and synchronized movement across the workspace. The methods developed to enable this system include the mechanical design, precise position control, and a dynamic model of the spring-mass-like system, ensuring accurate and stable movement of the robotic arms. The system's scalability is further explored by studying cable tension and sag to maintain performance as more robotic arms are deployed. Experimental and simulation results demonstrate the system's effectiveness in tasks including pick-and-place, showing its potential to contribute to agricultural automation.
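The abstract mentions a dynamic model for the spring-mass-like cable-arm system; a minimal one-dimensional spring-mass-damper integration of that general kind might look like the sketch below. All parameters are arbitrary illustrative values, not identified from the CAFEs hardware.

```python
# Minimal 1-D spring-mass-damper sketch for an end-effector pulled toward a
# commanded position along a cable. Parameters are arbitrary illustrative values.
m = 2.0      # effective mass of one floating end-effector (kg)
k = 50.0     # effective cable stiffness (N/m)
c = 8.0      # damping coefficient (N*s/m)
dt = 0.001   # integration step (s)

x, v = 0.0, 0.0          # position along the cable (m) and velocity (m/s)
x_target = 0.5           # commanded position (m)

for step in range(5000):
    # The spring pulls the arm toward the commanded position; damping resists motion.
    force = k * (x_target - x) - c * v
    a = force / m
    v += a * dt
    x += v * dt

print(f"position after {5000 * dt:.1f}s: {x:.3f} m (target {x_target} m)")
```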
- North America > United States > Texas > Loving County (0.04)
- Asia > Japan (0.04)
Drift-Aware Federated Learning: A Causal Perspective
Fang, Yunjie, Wu, Sheng, Yang, Tao, Wu, Xiaofeng, Hu, Bo
Federated learning (FL) facilitates collaborative model training among multiple clients while preserving data privacy, often resulting in better performance than models trained by individual clients. However, factors such as communication frequency and data distribution can contribute to feature drift, hindering optimal training performance. This paper examines the relationship between model update drift and the global and local optimizers from a causal perspective. The influence of the global optimizer on feature drift arises primarily from the participation frequency of certain clients in server updates, whereas the effect of the local optimizer is typically associated with imbalanced data distributions. To mitigate this drift, we propose a novel framework termed Causal drift-Aware Federated lEarning (CAFE). CAFE exploits the causal relationship between feature-invariant components and classification outcomes to independently calibrate local client sample features and classifiers during the training phase. In the inference phase, it eliminates the drift in the global model that favors frequently communicating clients. Experimental results demonstrate that CAFE's integration of feature calibration, parameter calibration, and historical information effectively reduces both drift towards majority classes and tendencies toward frequently communicating nodes.
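The bias toward frequently communicating clients that the abstract describes can be made concrete with a toy simulation. The correction shown, inverse-frequency reweighting at aggregation time, is a generic illustration of removing that drift, not the calibration procedure proposed in the paper.

```python
# Toy illustration of drift toward frequently communicating clients, and a
# simple frequency-aware reweighting at aggregation time. This is a generic
# sketch, not the feature/classifier calibration described in the paper.
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim, rounds = 5, 3, 200

# Each client's "ideal" update direction and its probability of participating.
client_updates = rng.normal(size=(n_clients, dim))
participation = np.array([0.9, 0.9, 0.3, 0.3, 0.3])

naive = np.zeros(dim)      # plain averaging of whoever shows up each round
corrected = np.zeros(dim)  # updates reweighted by inverse participation frequency

for _ in range(rounds):
    active = rng.random(n_clients) < participation
    if not active.any():
        continue
    naive += client_updates[active].mean(axis=0)
    weights = 1.0 / participation[active]          # rarely-seen clients count more
    corrected += (weights[:, None] * client_updates[active]).sum(axis=0) / weights.sum()

balanced = client_updates.mean(axis=0)             # drift-free reference direction
print("naive drift:    ", np.linalg.norm(naive / rounds - balanced))
print("corrected drift:", np.linalg.norm(corrected / rounds - balanced))
```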
- Asia > China (0.14)
- North America > Canada (0.14)
Path-based summary explanations for graph recommenders (extended version)
Karidi, Danae Pla, Pitoura, Evaggelia
Path-based explanations provide intrinsic insights into graph-based recommendation models. However, most previous work has focused on explaining an individual recommendation of an item to a user. In this paper, we propose summary explanations, i.e., explanations that highlight why a user or a group of users receives a set of item recommendations, and why an item or a group of items is recommended to a set of users, as an effective means of providing insight into the collective behavior of the recommender. We also present a novel method to summarize explanations using efficient graph algorithms, specifically the Steiner Tree and the Prize-Collecting Steiner Tree. Our approach reduces the size and complexity of summary explanations while preserving essential information, making explanations more comprehensible for users and more useful to model developers. Evaluations across multiple metrics demonstrate that our summaries outperform baseline explanation methods in most scenarios, across a variety of quality aspects.
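The core summarization step, connecting the users and items of interest through a small shared subgraph, can be illustrated with NetworkX's Steiner-tree approximation. The toy interaction graph and terminal set below are invented for illustration and do not come from the paper's datasets.

```python
# Toy illustration of summarizing explanation paths with a Steiner tree: the
# terminals are the users/items the summary must connect, and the approximate
# Steiner tree is the compact subgraph that explains them jointly.
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

G = nx.Graph()
G.add_weighted_edges_from([
    ("user_a", "genre_jazz", 1), ("user_b", "genre_jazz", 1),
    ("genre_jazz", "item_album1", 1), ("genre_jazz", "item_album2", 1),
    ("user_a", "artist_x", 2), ("artist_x", "item_album2", 2),
    ("user_b", "item_album3", 3),
])

# Users plus the recommended items whose joint explanation we want to summarize.
terminals = ["user_a", "user_b", "item_album1", "item_album2"]

summary = steiner_tree(G, terminals, weight="weight")
print("summary explanation edges:", sorted(summary.edges()))
```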
- North America > United States > New York > New York County > New York City (0.04)
- Europe > Switzerland (0.04)
- Europe > Portugal > Guarda > Guarda (0.04)
- (2 more...)
Communication Compression for Distributed Learning without Control Variates
Ortega, Tomas, Huang, Chun-Yin, Li, Xiaoxiao, Jafarkhani, Hamid
Distributed learning algorithms, such as those employed in Federated Learning (FL), require communication compression to reduce the cost of client uploads. The compression methods used in practice are often biased, requiring error feedback to achieve convergence when the compression is aggressive. In turn, error feedback requires client-specific control variates, which directly contradicts privacy-preserving principles and requires stateful clients. In this paper, we propose Compressed Aggregate Feedback (CAFe), a novel distributed learning framework that allows highly compressible client updates by exploiting past aggregated updates and does not require control variates. We consider Distributed Gradient Descent (DGD) as a representative algorithm and provide a theoretical proof of CAFe's superiority over Distributed Compressed Gradient Descent (DCGD) with biased compression in the non-smooth regime with bounded gradient dissimilarity. Experimental results confirm that CAFe consistently outperforms distributed learning with direct compression and highlight the compressibility of the client updates under CAFe.
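The core idea, compressing each client's update relative to the last aggregated update the server broadcast, can be sketched with a simple top-k compressor as below. The compressor, dimensions, and the assumption that local updates stay close to the previous aggregate are illustrative, not the paper's exact construction or analysis.

```python
# Sketch of compressing client updates relative to the last aggregated update.
# Top-k sparsification and the toy dimensions are illustrative assumptions.
import numpy as np

def top_k(v: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude entries of v and zero the rest (a biased compressor)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(1)
dim, n_clients, k = 100, 4, 10
prev_aggregate = rng.normal(size=dim)            # last broadcast aggregated update

# Assume each client's new update stays close to the previous aggregate,
# so the difference is highly compressible.
client_updates = [prev_aggregate + 0.05 * rng.normal(size=dim) for _ in range(n_clients)]

# Clients send only a compressed difference; the server adds back what it already knows.
compressed = [top_k(u - prev_aggregate, k) for u in client_updates]
aggregate = prev_aggregate + np.mean(compressed, axis=0)

direct = np.mean([top_k(u, k) for u in client_updates], axis=0)   # naive direct compression
exact = np.mean(client_updates, axis=0)
print("error with aggregate feedback:", np.linalg.norm(aggregate - exact))
print("error with direct compression:", np.linalg.norm(direct - exact))
```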
- North America > Canada > Ontario > Toronto (0.14)
- North America > United States > California > Orange County > Irvine (0.14)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)