Girault, Benjamin
Joint Graph and Vertex Importance Learning
Girault, Benjamin, Pavez, Eduardo, Ortega, Antonio
In this paper, we explore the topic of graph learning from the perspective of the Irregularity-Aware Graph Fourier Transform, with the goal of learning the graph signal space inner product to better model data. We propose a novel method to learn a graph with smaller edge weight upper bounds compared to combinatorial Laplacian approaches. Experimentally, our approach yields much sparser graphs compared to a combinatorial Laplacian approach.

To account for the difficulty associated with singular CGL matrices in inverse covariance estimation, the objective function is oftentimes modified [5, 9-12]. However, such an approach produces dense graphs, even if variables are weakly correlated (see Sec. 4 and [11]), because the modified objective function encourages well-connected graphs [9]. This issue can be solved by incorporating non-convex sparse regularization [11, 13] at the expense of a more complex graph learning algorithm.
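To make the sparsity discussion concrete, here is a minimal, illustrative sketch of inverse-covariance-based graph learning using the standard l1-regularized graphical lasso from scikit-learn. This is not the joint graph and vertex importance method of the paper; the synthetic data, the edge threshold, and the regularization values are assumptions chosen only to show how the density of the recovered graph depends on the regularization strength.

```python
# Illustrative only: standard l1-regularized inverse covariance (graphical lasso)
# graph estimation, not the joint graph / vertex importance method of the paper.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Synthetic data: a few weakly coupled variables plus independent noise dimensions.
n_samples, n_vars = 500, 6
X = rng.standard_normal((n_samples, n_vars))
X[:, 1] += 0.3 * X[:, 0]          # weak correlation between variables 0 and 1
X[:, 2] += 0.3 * X[:, 1]          # weak correlation between variables 1 and 2

for alpha in (0.01, 0.1, 0.3):    # regularization strength (illustrative values)
    model = GraphicalLasso(alpha=alpha).fit(X)
    precision = model.precision_
    # Off-diagonal nonzeros of the precision matrix define the graph edges.
    edges = np.count_nonzero(np.triu(np.abs(precision) > 1e-4, k=1))
    print(f"alpha={alpha}: {edges} edges out of {n_vars * (n_vars - 1) // 2}")
```

With weak regularization the estimated graph connects weakly correlated variables, and only stronger (or non-convex) sparsity penalties prune those edges, which is the trade-off between density and algorithmic complexity discussed above.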
Generating Labels for Regression of Subjective Constructs using Triplet Embeddings
Mundnich, Karel, Booth, Brandon M., Girault, Benjamin, Narayanan, Shrikanth
Human annotations play an important role in computational models where the target constructs under study are hidden, such as dimensions of affect. This is especially relevant in machine learning, where subjective labels derived from related observable signals (e.g., audio, video, text) are needed to support model training and testing. Current research trends focus on correcting artifacts and biases introduced by annotators during the annotation process while fusing multiple annotations into a single one. In this work, we propose a novel annotation approach using triplet embeddings. By lifting the absolute annotation process to relative annotations, where the annotator compares individual target constructs in triplets, we leverage the higher accuracy of comparisons over absolute ratings by human annotators. We then build a 1-dimensional embedding in Euclidean space that is indexed in time and serves as a label for regression. In this setting, annotation fusion occurs naturally as a union of sets of sampled triplet comparisons among different annotators. We show that by using our proposed sampling method to find an embedding, we are able to accurately represent synthetic hidden constructs in time under noisy sampling conditions. We further validate this approach using human annotations collected from Mechanical Turk and show that we can recover the underlying structure of the hidden construct up to bias and scaling factors.
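As a rough illustration of turning triplet comparisons into a 1-dimensional, time-indexed embedding, the sketch below fits such an embedding by minimizing a hinge loss over triplets with plain gradient descent. This is a generic triplet-embedding baseline under assumptions of my own (the embed_1d function, its margin and learning-rate parameters, the synthetic sinusoidal construct, and the 15% flipped-comparison noise are all hypothetical), not the sampling method or solver proposed in the paper.

```python
# Illustrative sketch: a 1-D embedding from triplet comparisons via hinge-loss
# gradient descent. A generic triplet-embedding baseline, not the paper's algorithm.
import numpy as np

def embed_1d(triplets, n_items, margin=0.1, lr=0.05, n_iters=2000, seed=0):
    """Each triplet (i, j, k) encodes 'item i is closer to j than to k'."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_items) * 0.01   # initial 1-D positions
    triplets = np.asarray(triplets)
    i, j, k = triplets[:, 0], triplets[:, 1], triplets[:, 2]
    for _ in range(n_iters):
        d_ij = np.abs(x[i] - x[j])            # distance to the "near" item
        d_ik = np.abs(x[i] - x[k])            # distance to the "far" item
        active = d_ij + margin > d_ik         # margin-violating triplets
        s_ij = np.sign(x[i] - x[j])
        s_ik = np.sign(x[i] - x[k])
        # Hinge loss per triplet: max(0, d_ij - d_ik + margin); accumulate its gradient.
        grad = np.zeros(n_items)
        np.add.at(grad, i[active], (s_ij - s_ik)[active])
        np.add.at(grad, j[active], (-s_ij)[active])
        np.add.at(grad, k[active], (s_ik)[active])
        x -= lr * grad / max(1, active.sum())
    return x

# Toy example: a hidden 1-D construct over time and noisy triplet comparisons against it.
rng = np.random.default_rng(1)
hidden = np.sin(np.linspace(0, 2 * np.pi, 50))            # hidden construct, 50 time points
idx = rng.integers(0, 50, size=(3000, 3))
closer = np.abs(hidden[idx[:, 0]] - hidden[idx[:, 1]]) < np.abs(hidden[idx[:, 0]] - hidden[idx[:, 2]])
triplets = np.where(closer[:, None], idx, idx[:, [0, 2, 1]])  # order each triplet consistently
noise = rng.random(len(triplets)) < 0.15                  # simulated annotator errors
triplets[noise] = triplets[noise][:, [0, 2, 1]]           # flip the noisy comparisons
x = embed_1d(triplets, n_items=50)
# The recovered embedding matches 'hidden' only up to sign, bias, and scaling.
print(np.round(np.corrcoef(x, hidden)[0, 1], 2))
```

As in the abstract, the recovered embedding is only identifiable up to bias and scaling (including sign), so the final check uses correlation with the hidden construct rather than pointwise error.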