relative representation
mini-vec2vec: Scaling Universal Geometry Alignment with Linear Transformations
We build upon vec2vec, a procedure for aligning text embedding spaces without parallel data. vec2vec achieves a near-perfect alignment, but it is expensive and unstable. We present mini-vec2vec, a simple and efficient alternative that incurs substantially lower computational cost and is highly robust; moreover, the learned mapping is a linear transformation. Our method consists of three main stages: tentative matching of pseudo-parallel embedding vectors, transformation fitting, and iterative refinement. Our linear alternative exceeds the original instantiation of vec2vec by orders of magnitude in efficiency, while matching or exceeding its results. The method's stability and interpretable algorithmic steps facilitate scaling and unlock new opportunities for adoption in new domains and fields.
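The three stages above can be sketched as an alternating loop: an orthogonal linear map is fit by Procrustes analysis on tentative pairs, then matching and fitting alternate as refinement. This is an illustrative sketch, not the authors' implementation; the nearest-neighbor matching rule, identity initialization, and fixed iteration count here are placeholder assumptions.

```python
import numpy as np

def fit_orthogonal_map(X, Y):
    """Fit an orthogonal W minimizing ||X @ W - Y||_F (Procrustes)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def align_iteratively(X, Y, n_iters=5):
    """Alternate tentative matching and transformation fitting.

    X, Y: embedding matrices (rows are vectors) from two spaces of
    equal dimension; no row correspondence is assumed up front.
    """
    # Start from the identity map (a stand-in for the tentative
    # pseudo-parallel matching stage of the actual method).
    W = np.eye(X.shape[1])
    for _ in range(n_iters):
        # Match each mapped x to its current nearest y (pseudo-pairs).
        mapped = X @ W
        idx = np.argmax(mapped @ Y.T, axis=1)
        # Refit the linear map on the current pseudo-pairs.
        W = fit_orthogonal_map(X, Y[idx])
    return W
```

Because each refit is a closed-form SVD, every iteration is cheap and the map stays exactly orthogonal, which is one way a linear parameterization buys stability over an adversarially trained one.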
Relative Representations of Latent Spaces enable Efficient Semantic Channel Equalization
Hüttebräucker, Tomás, Fiorellino, Simone, Sana, Mohamed, Di Lorenzo, Paolo, Strinati, Emilio Calvanese
In multi-user semantic communication, language mismatch poses a significant challenge when independently trained agents interact. We present a novel semantic equalization algorithm that enables communication between agents with different languages without additional retraining. Our algorithm is based on relative representations, a framework that enables agents employing different neural network models to share a unified representation. It proceeds by projecting the latent vectors of different models into a common space defined relative to a set of data samples called \textit{anchors}, whose number equals the dimension of the resulting space. Communication between different agents then translates to communication of semantic symbols sampled from this relative space. In addition to aligning the semantic representations of different agents, this approach allows compressing the amount of information exchanged by appropriately selecting the number of anchors. Finally, we introduce a novel anchor selection strategy that determines prototypical anchors, capturing the information most relevant to the downstream task. Our numerical results show the effectiveness of the proposed approach, enabling seamless communication between agents with radically different models, including differences in neural network architecture and in the datasets used for initial training.
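A minimal sketch of the relative projection described above, assuming cosine similarity as the anchor-comparison function (a common choice for relative representations; the paper's exact similarity function may differ). Each latent vector is re-expressed by its similarities to the k anchors, so the output dimension equals the number of anchors:

```python
import numpy as np

def relative_representation(X, anchors):
    """Project latent vectors onto cosine similarities with anchors.

    X:       (n, d) latent vectors from some encoder.
    anchors: (k, d) latents of the k anchor samples, encoded by the
             same model; the relative space has dimension k.
    Returns: (n, k) relative representations.
    """
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    An = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return Xn @ An.T
```

Two agents that encode the same anchor samples with different models obtain comparable relative spaces, because cosine similarity is unchanged by any rotation of the latent space; shrinking k compresses the transmitted representation.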
Relative Representations: Topological and Geometric Perspectives
García-Castellanos, Alejandro, Marchetti, Giovanni Luca, Kragic, Danica, Scolamiero, Martina
Relative representations are an established approach to zero-shot model stitching, consisting of a non-trainable transformation of the latent space of a deep neural network. Based on insights of a topological and geometric nature, we propose two improvements to relative representations. First, we introduce a normalization procedure in the relative transformation, resulting in invariance to non-isotropic rescalings and permutations; the latter coincide with the symmetries in parameter space induced by common activation functions. Second, we propose to deploy topological densification, a topological regularization loss that encourages clustering within classes, when fine-tuning relative representations. We provide an empirical investigation on a natural language task, where both proposed variations yield improved performance on zero-shot model stitching.
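One plausible reading of the normalization idea, sketched in code: standardizing each latent coordinate with statistics computed on the anchors cancels any per-axis rescaling, and a permutation of the axes permutes both inputs identically, leaving the similarities unchanged. This is an illustrative construction under those assumptions; the paper's exact normalization procedure may differ.

```python
import numpy as np

def normalized_relative_representation(X, anchors):
    """Relative representation with per-dimension standardization.

    Each latent coordinate is standardized using mean/std computed on
    the anchor set, then cosine similarities to the anchors are taken.
    The result is invariant to non-isotropic (per-axis) rescalings and
    to permutations of the latent axes.
    """
    mu = anchors.mean(axis=0)
    sigma = anchors.std(axis=0)
    Xs = (X - mu) / sigma
    As = (anchors - mu) / sigma
    Xs /= np.linalg.norm(Xs, axis=1, keepdims=True)
    As /= np.linalg.norm(As, axis=1, keepdims=True)
    return Xs @ As.T
```

Plain cosine-based relative representations are only rotation-invariant; the anchor-based standardization is what extends the invariance group to per-axis scalings and permutations, matching the symmetries induced by elementwise activations.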
Dynamic Relative Representations for Goal-Oriented Semantic Communications
Fiorellino, Simone, Battiloro, Claudio, Strinati, Emilio Calvanese, Di Lorenzo, Paolo
In future 6G wireless networks, semantic and effectiveness aspects of communication will play a fundamental role, incorporating meaning and relevance into transmissions. However, obstacles arise when devices employ diverse languages, logics, or internal representations, leading to semantic mismatches that may jeopardize understanding. In latent space communication, this challenge manifests as misalignment within the high-dimensional representations where deep neural networks encode data. This paper presents a novel framework for goal-oriented semantic communication that leverages relative representations to mitigate semantic mismatches via latent space alignment. We propose a dynamic optimization strategy that adapts relative representations, communication parameters, and computation resources for energy-efficient, low-latency, goal-oriented semantic communication. Numerical results demonstrate our methodology's effectiveness in mitigating mismatches among devices while optimizing energy consumption, delay, and task effectiveness.