Glyce: Glyph-vectors for Chinese Character Representations

Neural Information Processing Systems

It is intuitive that NLP tasks for logographic languages like Chinese should benefit from the use of the glyph information in those languages. However, due to the lack of rich pictographic evidence in glyphs and the weak generalization ability of standard computer vision models on character data, an effective way to utilize the glyph information remains to be found. In this paper, we address this gap by presenting Glyce, the glyph-vectors for Chinese character representations. We make three major innovations: (1) We use historical Chinese scripts (e.g., bronzeware script, seal script, traditional Chinese, etc.) to enrich the pictographic evidence in characters; (2) We design CNN structures (called Tianzige-CNN) tailored to Chinese character image processing; and (3) We use image classification as an auxiliary task in a multi-task learning setup to increase the model's ability to generalize. We show that glyph-based models are able to consistently outperform word/char ID-based models in a wide range of Chinese NLP tasks. We are able to set new state-of-the-art results for a variety of Chinese NLP tasks, including tagging (NER, CWS, POS), sentence pair classification, single sentence classification tasks, dependency parsing, and semantic role labeling. For example, the proposed model achieves an F1 score of 80.6 on the OntoNotes NER dataset, +1.5 over BERT; it achieves an almost perfect accuracy of 99.8% on the Fudan corpus for text classification.
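The auxiliary image-classification objective described in (3) can be sketched as a weighted sum of the downstream task loss and the glyph-classification loss, with the auxiliary weight decaying over training. This is a minimal sketch, not the paper's exact formulation; the names `lambda0` and `decay` and the exponential decay schedule are illustrative assumptions.

```python
def glyce_style_loss(task_loss, image_cls_loss, epoch, lambda0=0.1, decay=0.8):
    """Mix the downstream task loss with the auxiliary glyph image-
    classification loss. The mixing weight decays each epoch, so the
    auxiliary task matters early and the main task dominates later.
    (Illustrative schedule; lambda0/decay are assumed hyperparameters.)"""
    lam = lambda0 * (decay ** epoch)      # decaying auxiliary weight
    return (1.0 - lam) * task_loss + lam * image_cls_loss
```

With `lambda0=0.5, decay=1.0`, the two losses are averaged; as `epoch` grows with `decay < 1`, the combined loss approaches the pure task loss.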


We sincerely thank all the reviewers for their insightful suggestions

Neural Information Processing Systems

We sincerely thank all the reviewers for their insightful suggestions. We will add them back in the updated version, which will have 1 more page. Finally, we relax BERT and fine-tune the two models jointly. Results are shown in Table 1. As can be seen, this auxiliary training objective introduces a +0.8 F1 performance boost.



Neural Information Processing Systems

With advances in the quality of text-to-image (T2I) models has come interest in benchmarking their prompt faithfulness: the semantic coherence of generated images to the prompts they were conditioned on. A variety of T2I faithfulness metrics have been proposed, leveraging advances in cross-modal embeddings and vision-language models (VLMs). However, these metrics are not rigorously compared and benchmarked; instead, they are presented with correlations to human Likert scores over a set of easy-to-discriminate images against seemingly weak baselines.


On the Necessity of Collaboration for Online Model Selection with Decentralized Data
Junfan Li, Zheshun Wu, Irwin King

Neural Information Processing Systems

We consider online model selection with decentralized data over M clients, and study the necessity of collaboration among clients. Previous work proposed various federated algorithms without demonstrating their necessity, while we answer the question from a novel perspective of computational constraints. We prove lower bounds on the regret, and propose a federated algorithm and analyze the upper bound. Our results show (i) collaboration is unnecessary in the absence of computational constraints on clients; (ii) collaboration is necessary if the computational cost on each client is limited to o(K), where K is the number of candidate hypothesis spaces. We clarify the unnecessary nature of collaboration in previous federated algorithms for distributed online multi-kernel learning, and improve the regret bounds at a smaller computational and communication cost. Our algorithm relies on three new techniques: an improved Bernstein's inequality for martingales, a federated online mirror descent framework, and decoupling model selection from prediction, which might be of independent interest.
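Online model selection over K candidate hypothesis spaces is typically driven by a mirror-descent-style update over a weight per candidate. As a minimal sketch (not the paper's federated algorithm), the classic entropic mirror descent / exponential-weights update looks like this; `eta` and the loss normalization to [0, 1] are assumptions.

```python
import math

def exp_weights(losses_per_round, K, eta=0.5):
    """Entropic online mirror descent (exponential weights) over K
    candidates: maintain a probability per candidate, pay the expected
    loss each round, then reweight multiplicatively by observed losses.
    Sketch only; losses are assumed to lie in [0, 1]."""
    w = [1.0 / K] * K
    total = 0.0
    for losses in losses_per_round:            # losses: length-K list
        total += sum(wi * li for wi, li in zip(w, losses))
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
        z = sum(w)
        w = [wi / z for wi in w]               # renormalize onto the simplex
    return total, w
```

Run against a candidate that is always right and one that is always wrong, the weight mass concentrates on the good candidate and cumulative loss stays bounded, which is the behavior the regret analysis quantifies.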


Worst-Case Regret Bounds for Exploration via Randomized Value Functions

Neural Information Processing Systems

This paper studies a recent proposal to use randomized value functions to drive exploration in reinforcement learning. These randomized value functions are generated by injecting random noise into the training data, making the approach compatible with many popular methods for estimating parameterized value functions. By providing a worst-case regret bound for tabular finite-horizon Markov decision processes, we show that planning with respect to these randomized value functions can induce provably efficient exploration.
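In the tabular finite-horizon setting the idea can be sketched as backward induction on noise-perturbed data: injecting random noise into each reward target stands in for perturbing the training data, and acting greedily with respect to the resulting randomized Q-values drives exploration. This is a simplified illustration of the randomized-value-function principle, not the paper's exact RLSVI algorithm; `sigma` and the reward-noise placement are assumptions.

```python
import random

def randomized_q(R, P, H, sigma=0.1, seed=0):
    """Backward induction with Gaussian noise injected into each reward
    target, yielding one random sample of a Q-function. R[s][a] is the
    mean reward, P[s][a][s2] the transition probability, H the horizon.
    Sketch of the randomized-value-function idea, not exact RLSVI."""
    rng = random.Random(seed)
    S, A = len(R), len(R[0])
    Q = [[[0.0] * A for _ in range(S)] for _ in range(H + 1)]
    for h in range(H - 1, -1, -1):
        for s in range(S):
            for a in range(A):
                noisy_r = R[s][a] + rng.gauss(0.0, sigma)   # perturbed target
                future = sum(P[s][a][s2] * max(Q[h + 1][s2])
                             for s2 in range(S))
                Q[h][s][a] = noisy_r + future
    return Q[0]   # first-step Q-values; the agent acts greedily on these
```

With `sigma=0` this reduces to exact backward induction; with `sigma>0`, resampling across episodes randomizes which actions look best, playing the role of an exploration bonus.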


Reviewer 2: while the principle is certainly common in the literature, this is the first paper to demonstrate frequentist

Neural Information Processing Systems

Thank you to all reviewers! The journal paper Osband et al. [2017] develops a theory of recursive stochastic dominance ... I've worked very hard to uncover a new proof that hopefully makes it easy for future ... This paper tries to provide some backing for the claim that "Training a ...

Responses to Reviewer 1: [Paraphrasing] (1) Would a similar analysis yield a high-probability regret bound? ... (2) ...

I now believe the high-probability bounds work out. Rather than take an expectation, the Azuma-Hoeffding inequality should bound ... I think setting β too large is similar to setting overly large optimism bonuses for optimistic algorithms. One needs to be careful to avoid introducing even more factors of H. You're right that in initial periods, RLSVI is really no better than uniform exploration, though perhaps ... Some other parts of the state/action space are still poorly understood. Once the algorithm starts actively trying to reach poorly understood states, it's ... Things are more subtle with neural networks, but Osband et al. [2018] offers one approach.


PC-Fairness: A Unified Framework for Measuring Causality-based Fairness

Neural Information Processing Systems

A recent trend in fair machine learning is to define fairness via causality-based notions, which concern the causal connection between protected attributes and decisions. However, one common challenge for all causality-based fairness notions is identifiability, i.e., whether they can be uniquely measured from observational data, which is a critical barrier to applying these notions to real-world situations. In this paper, we develop a framework for measuring different causality-based fairness notions. We propose a unified definition that covers most previous causality-based fairness notions, namely path-specific counterfactual fairness (PC fairness). Based on that, we propose a general method, in the form of a constrained optimization problem, for bounding path-specific counterfactual fairness under all unidentifiable situations. Experiments on synthetic and real-world datasets show the correctness and effectiveness of our method.


A Closer Look at the CLS Token for Cross-Domain Few-Shot Learning
Shuai Yi, Yuhua Li, Ruixuan Li

Neural Information Processing Systems

Vision Transformer (ViT) has shown great power in learning from large-scale datasets. However, collecting sufficient data for expert knowledge is always difficult. To handle this problem, Cross-Domain Few-Shot Learning (CDFSL) has been proposed to transfer the source-domain knowledge learned from sufficient data to target domains where only scarce data is available. In this paper, we find an intriguing phenomenon neglected by previous works for the CDFSL task based on ViT: leaving the CLS token randomly initialized, instead of loading source-domain trained parameters, could consistently improve target-domain performance.


a new benchmark for evaluating general-purpose NLU systems, which is necessary given the saturation of the GLUE

Neural Information Processing Systems

We thank all the reviewers for their time and comments. Our work builds directly on GLUE and maintains the same general structure. Our benchmark does have a less uniform API than GLUE, but we view this as both a pro and a con. WSC is a coreference task but is designed to require commonsense reasoning to solve. COPA explicitly tests systems' causal reasoning ability (somewhat related to commonsense reasoning).


Invariance and identifiability issues for word embeddings

Neural Information Processing Systems

Word embeddings are commonly obtained as optimizers of a criterion function f of a text corpus, but assessed on word-task performance using a different evaluation function g of the test data. We contend that a possible source of disparity in performance on tasks is the incompatibility between classes of transformations that leave f and g invariant. In particular, word embeddings defined by f are not unique; they are defined only up to a class of transformations to which f is invariant, and this class is larger than the class to which g is invariant. One implication of this is that the apparent superiority of one word embedding over another, as measured by word task performance, may largely be a consequence of the arbitrary elements selected from the respective solution sets. We provide a formal treatment of the above identifiability issue, present some numerical examples, and discuss possible resolutions.
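The invariance mismatch can be made concrete with a toy example. Assume a training criterion f that depends on the embeddings only through the inner products W Cᵀ (as in skip-gram-style objectives): f is then invariant under any invertible map W → W A, C → C A⁻ᵀ, but an evaluation g based on cosine similarity between word vectors is only invariant under a smaller class of transformations. This is a minimal numeric illustration of that gap; the specific vectors and the diagonal map A are arbitrary choices.

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def cosine(u, v):
    return dot(u, v) / math.sqrt(dot(u, u) * dot(v, v))

# Two 2-d word vectors (rows of W) and two context vectors (rows of C).
W = [[1.0, 0.0], [1.0, 1.0]]
C = [[0.0, 1.0], [1.0, 1.0]]

# Invertible diagonal map A = diag(2, 1/2): send W -> W A and C -> C A^{-T}.
A = [2.0, 0.5]
W2 = [[wi * ai for wi, ai in zip(w, A)] for w in W]
C2 = [[ci / ai for ci, ai in zip(c, A)] for c in C]

# The criterion f sees only W C^T, which the transformation leaves unchanged...
same_dots = all(abs(dot(W[i], C[j]) - dot(W2[i], C2[j])) < 1e-12
                for i in range(2) for j in range(2))

# ...but the cosine similarity between the two word vectors does change.
changed_cos = abs(cosine(W[0], W[1]) - cosine(W2[0], W2[1])) > 1e-6
```

Both `W` and `W2` are equally good optimizers of such an f, yet they score differently under a cosine-based word task, which is exactly the arbitrariness the identifiability discussion concerns.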