

Intrinsic persistent homology via density-based metric learning

arXiv.org Machine Learning

We address the problem of estimating intrinsic distances in a manifold from a finite sample. We prove that the metric space defined by the sample, endowed with a computable metric known as the sample Fermat distance, converges almost surely in the sense of Gromov-Hausdorff. The limiting object is the manifold itself endowed with the population Fermat distance, an intrinsic metric that accounts for both the geometry of the manifold and the density that produces the sample. This result is applied to obtain sample persistence diagrams that converge towards an intrinsic persistence diagram. We show that this method outperforms more standard approaches based on the Euclidean norm, supported by both theoretical results and computational experiments.
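The sample Fermat distance described above can be computed as a shortest path through the sample in which a hop of Euclidean length d costs d^p. A minimal sketch follows; the complete-graph Dijkstra search and the choice p=2 are illustrative assumptions, not the paper's exact procedure:

```python
import heapq
import math

def fermat_distance(points, src, dst, p=2.0):
    """Sample Fermat distance: cheapest path through the sample,
    where a hop of Euclidean length d costs d**p (p >= 1)."""
    n = len(points)
    dist = [math.inf] * n
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, i = heapq.heappop(pq)
        if d > dist[i]:
            continue
        if i == dst:
            return d
        for j in range(n):
            if j != i:
                nd = d + math.dist(points[i], points[j]) ** p
                if nd < dist[j]:
                    dist[j] = nd
                    heapq.heappush(pq, (nd, j))
    return dist[dst]

sample = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
# With p=2, routing through the middle point (cost 1+1=2)
# beats the direct hop (cost 2**2 = 4).
print(fermat_distance(sample, 0, 2, p=2.0))
```

For p > 1 the penalty on long hops forces paths to follow regions where the sample is dense, which is how the metric comes to reflect both geometry and density.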


Japan COVID-19 cases set single-day record

The Japan Times

Japan shattered the nationwide record for COVID-19 cases in a day on Wednesday, registering 2,746 cases of the deadly virus as of 6 p.m., public broadcaster NHK reported. The day saw a spate of records in several prefectures, including 245 cases in Aichi, 75 in Kyoto, 72 in Hiroshima, 49 in Gunma and 21 in Oita, and comes amid a recent surge in infections that have prompted concern among regional governments and health authorities. Tokyo, meanwhile, reported 572 new cases -- the second highest daily total ever -- while the number of serious cases dipped by one from a day earlier to 59. The news came a day after the nationwide death toll hit a single-day record of 47 and as serious cases also hit an all-time daily high of 536, according to the health ministry. The capital's daily figure on Wednesday, which was just shy of the record 584 cases recorded last Saturday, was based on 1,428 tests, the Tokyo Metropolitan Government said in a statement.


Information retrieval document search using vector space model in R

#artificialintelligence

Now calculate the cosine similarity between each document and each query. For each query, sort the cosine-similarity scores of all documents and take the top-3 documents with the highest scores.
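The steps above can be sketched as follows. The original post works in R; this is a hedged, self-contained Python illustration with made-up example documents, using raw term frequencies rather than the tf-idf weighting a full vector space model would apply:

```python
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(u[t] * v.get(t, 0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def top_k(query, docs, k=3):
    """Rank documents by cosine similarity to the query; keep the top k."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(d.lower().split())), i)
              for i, d in enumerate(docs)]
    return [i for s, i in sorted(scored, reverse=True)[:k]]

docs = ["term weighting in the vector space model",
        "shortest path algorithms",
        "document ranking with cosine similarity",
        "graph kernels for classification"]
print(top_k("cosine similarity ranking of documents", docs))
```

Document 2 shares the most terms with the query and so ranks first; the remaining slots are filled by the zero-overlap documents in tie-broken order.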


A PAC-Bayesian Perspective on Structured Prediction with Implicit Loss Embeddings

arXiv.org Machine Learning

Many practical machine learning tasks can be framed as structured prediction problems, where several output variables are predicted and considered interdependent. Recent theoretical advances in structured prediction have focused on obtaining fast-rate convergence guarantees, especially in the Implicit Loss Embedding (ILE) framework. PAC-Bayes has gained interest recently for its capacity to produce tight risk bounds for distributions over predictors. This work proposes a novel PAC-Bayesian perspective on the ILE structured prediction framework. We present two generalization bounds, on the risk and the excess risk, which yield insights into the behavior of ILE predictors. Two learning algorithms are derived from these bounds.
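The abstract does not reproduce the bounds themselves. For context, a classical PAC-Bayesian generalization bound (McAllester's, in a standard refined form) reads

```latex
\Pr\Big[\;\forall Q:\ R(Q) \;\le\; \widehat{R}_n(Q)
  \;+\; \sqrt{\frac{\mathrm{KL}(Q\,\|\,P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}\;\Big] \;\ge\; 1-\delta ,
```

where P is a prior over predictors fixed before seeing the n training examples, Q is any posterior, and R, R̂_n denote the true and empirical risks. The bounds in the paper are in this spirit, adapted to the ILE setting; this display is background, not the paper's result.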


LCS Graph Kernel Based on Wasserstein Distance in Longest Common Subsequence Metric Space

arXiv.org Machine Learning

For graph classification tasks, many methods use a common strategy of aggregating information from vertex neighbors. Although this strategy provides an efficient means of extracting graph topological features, it brings in excessive amounts of information that can greatly reduce accuracy when dealing with large-scale neighborhoods. Learning graphs using paths or walks does not suffer from this difficulty, but many such methods make poor use of each path or walk, which can cause information loss and high computational costs. To address this, we propose a graph kernel based on the longest common subsequence (LCS kernel) to compute a more comprehensive similarity between paths and walks, which resolves substructure isomorphism difficulties. We also combine it with optimal transport theory to extract more in-depth features of graphs. Furthermore, we propose an LCS metric space and apply an adjacent-point merge operation to reduce its computational costs. Finally, we demonstrate that our proposed method outperforms many state-of-the-art graph kernel methods.
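The kernel's core ingredient is the standard dynamic-programming longest common subsequence. A minimal sketch follows; the max-length normalization used to turn the LCS length into a similarity is an illustrative assumption, not necessarily the paper's construction:

```python
def lcs_length(a, b):
    """Dynamic-programming length of the longest common subsequence."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def lcs_similarity(a, b):
    """Normalized LCS similarity in [0, 1]; here a and b would be
    vertex-label sequences along paths or walks sampled from two graphs."""
    return lcs_length(a, b) / max(len(a), len(b)) if (a or b) else 1.0

print(lcs_similarity("ABCBDAB", "BDCABA"))  # LCS length is 4, e.g. "BCBA"
```

Because a subsequence need not be contiguous, LCS tolerates insertions along a walk, which is what lets it compare paths more comprehensively than exact substring matching.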


Unsupervised Word Translation Pairing using Refinement based Point Set Registration

arXiv.org Artificial Intelligence

Cross-lingual alignment of word embeddings plays an important role in knowledge transfer across languages, improving machine translation and other multilingual applications. Current unsupervised approaches rely on similarities in the geometric structure of word embedding spaces across languages to learn structure-preserving linear transformations, using adversarial networks and refinement strategies. In practice, however, such techniques tend to suffer from instability and convergence issues, requiring tedious fine-tuning for precise parameter settings. This paper proposes BioSpere, a novel framework for unsupervised mapping of bilingual word embeddings onto a shared vector space, combining adversarial initialization and a refinement procedure with a point-set registration algorithm used in image processing. We show that our framework alleviates the shortcomings of existing methodologies and is relatively invariant to variable adversarial learning performance, exhibiting robustness in terms of parameter choices and training losses. Experimental evaluation on the parallel dictionary induction task demonstrates state-of-the-art results for our framework on diverse language pairs.


AdCo: Adversarial Contrast for Efficient Learning of Unsupervised Representations from Self-Trained Negative Adversaries

arXiv.org Artificial Intelligence

Contrastive learning relies on constructing a collection of negative examples that are sufficiently hard to discriminate against positive queries when their representations are self-trained. Existing contrastive learning methods either maintain a queue of negative samples over minibatches, of which only a small portion is updated in each iteration, or use only the other examples in the current minibatch as negatives. The former cannot closely track the change of the learned representation over iterations, since the queue is never updated as a whole; the latter discards useful information from past minibatches. Alternatively, we propose to directly learn a set of negative adversaries playing against the self-trained representation. Two players, the representation network and the negative adversaries, are alternately updated to obtain the most challenging negative examples, against which the representation of positive queries will be trained to discriminate. We further show that the negative adversaries are updated towards a weighted combination of positive queries by maximizing the adversarial contrastive loss, thereby allowing them to closely track the change of representations over time. Experimental results demonstrate that the proposed Adversarial Contrastive (AdCo) model not only achieves superior performance with little computational overhead relative to state-of-the-art contrastive models, but can also be pretrained more rapidly, with fewer epochs.


On InstaHide, Phase Retrieval, and Sparse Matrix Factorization

arXiv.org Machine Learning

In this work, we examine the security of InstaHide, a scheme recently proposed by [Huang, Song, Li and Arora, ICML'20] for preserving the security of private datasets in the context of distributed learning. To generate a synthetic training example to be shared among the distributed learners, InstaHide takes a convex combination of private feature vectors and randomly flips the sign of each entry of the resulting vector with probability 1/2. A salient question is whether this scheme is secure in any provable sense, perhaps under a plausible hardness assumption and assuming the distributions generating the public and private data satisfy certain properties. We show that the answer to this appears to be quite subtle and closely related to the average-case complexity of a new multi-task, missing-data version of the classic problem of phase retrieval. Motivated by this connection, we design a provable algorithm that can recover private vectors using only the public vectors and synthetic vectors generated by InstaHide, under the assumption that the private and public vectors are isotropic Gaussian.
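The InstaHide encoding step described above can be sketched as follows. The specific mixing of private with public vectors, the parameter names, and the uniform random weights are illustrative assumptions; the abstract specifies only a convex combination followed by independent sign flips with probability 1/2:

```python
import random

def instahide_encode(private, public, k_priv=2, k_pub=2, rng=None):
    """Sketch of an InstaHide-style encoding: a random convex combination
    of k_priv private and k_pub public feature vectors, followed by
    flipping the sign of each coordinate with probability 1/2."""
    rng = rng or random.Random()
    vecs = rng.sample(private, k_priv) + rng.sample(public, k_pub)
    weights = [rng.random() for _ in vecs]
    total = sum(weights)
    weights = [w / total for w in weights]            # convex combination
    mixed = [sum(w * v[i] for w, v in zip(weights, vecs))
             for i in range(len(vecs[0]))]
    signs = [rng.choice((-1.0, 1.0)) for _ in mixed]  # random sign flips
    return [s * x for s, x in zip(signs, mixed)]
```

The sign flips destroy the sign pattern of the mixture but preserve coordinate magnitudes, which is exactly the structure that connects recovery to a missing-data variant of phase retrieval.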


A contribution to Optimal Transport on incomparable spaces

arXiv.org Machine Learning

Optimal Transport is a theory that allows one to define geometrical notions of distance between probability distributions and to find correspondences and relationships between sets of points. Many machine learning applications are derived from this theory, at the frontier between mathematics and optimization. This thesis proposes to study the complex scenario in which the different data belong to incomparable spaces. In particular we address the following questions: how can Optimal Transport be defined and applied between graphs, or more generally between structured data? How can it be adapted when the data are varied and not embedded in the same metric space? This thesis proposes a set of Optimal Transport tools for these different cases. An important part is devoted to the study of the Gromov-Wasserstein distance, whose properties make it possible to define interesting transport problems on incomparable spaces. More broadly, we analyze the mathematical properties of the various proposed tools, establish algorithmic solutions to compute them, and study their applicability in numerous machine learning scenarios covering, in particular, classification, simplification, and partitioning of structured data, as well as heterogeneous domain adaptation.
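As a concrete instance of a computable transport problem, here is a minimal Sinkhorn iteration for entropic-regularized discrete optimal transport between two histograms. This is a standard algorithm given for illustration, not necessarily the solver used in the thesis, and the regularization strength and iteration count are arbitrary choices:

```python
import math

def sinkhorn(a, b, cost, eps=0.1, iters=500):
    """Entropic-regularized OT between discrete distributions a and b
    with cost matrix `cost`, via Sinkhorn's matrix-scaling iterations.
    Returns the transport plan whose marginals approximate a and b."""
    K = [[math.exp(-c / eps) for c in row] for row in cost]  # Gibbs kernel
    n, m = len(a), len(b)
    u, v = [1.0] * n, [1.0] * m
    for _ in range(iters):
        u = [a[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

# Two uniform two-point histograms; cheap mass stays on the diagonal.
plan = sinkhorn([0.5, 0.5], [0.5, 0.5], [[0.0, 1.0], [1.0, 0.0]])
```

The Gromov-Wasserstein problem studied in the thesis replaces the fixed cost matrix with a comparison of intra-space distance structures, which is what removes the requirement that the two point sets live in the same metric space.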


US coronavirus cases set record, deaths rising -- with crisis central to Trump-Biden election battle

FOX News

New confirmed cases of the coronavirus in the U.S. have climbed to an all-time high of more than 86,000 per day on average, in a glimpse of the worsening crisis that lies ahead for the winner of the presidential election. Cases and hospitalizations are setting records all around the country just as the holidays and winter approach, demonstrating the challenge that either President Donald Trump or former Vice President Joe Biden will face in the coming months. Daily new confirmed coronavirus cases in the U.S. have surged 45% over the past two weeks, to a record 7-day average of 86,352, according to data compiled by Johns Hopkins University. Deaths are also on the rise, up 15% to an average of 846 deaths every day. The total U.S. death toll is already more than 232,000, and total confirmed U.S. cases have surpassed 9 million.