Goto

Collaborating Authors: Chiang, Wen-Hao


Approximately Aligned Decoding

arXiv.org Artificial Intelligence

It is common to reject undesired outputs of Large Language Models (LLMs); however, current methods to do so either require an excessive amount of computation or severely distort the distribution of outputs. We present a method that balances distortion of the output distribution against computational efficiency, allowing for the generation of long sequences of text under difficult-to-satisfy constraints with less amplification of low-probability outputs than existing methods. We show through a series of experiments that the task-specific performance of our method is comparable to that of methods that do not distort the output distribution, while being much more computationally efficient.

Language models sometimes generate undesirable outputs, such as syntactically incorrect code, hallucinated PII, or profanity. These conditions, which we collectively refer to as errors for the remainder of the paper, can be detected with incremental parsers, regular-expression matching, or even simple substring searches. Once detection occurs, however, there are several competing methods for mitigating errors in the output.

One set of methods, constrained generation (Beurer-Kellner et al., 2024; Geng et al., 2024; Melcer et al., 2024), avoids errors by disabling the generation of any token that immediately leads to such an error. While this method is effective, it can amplify low-probability outputs. Another class of methods avoids errors without any amplification of low-probability outputs, at the cost of additional computation. Rejection sampling is the simplest such method: if the output contains an error, simply generate another sample until the output is acceptable. Adaptive Sampling with Approximate Expected Futures (ASAp) (Park et al., 2024) improves on rejection sampling while maintaining the output distribution by effectively sampling without replacement, but there are still many situations in which it converges too slowly. In our experiments, we show that our method obtains task-specific performance on par with ASAp while converging significantly faster when the constraints are difficult to satisfy. We first describe autoregressive language models and their properties.
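To make the trade-off concrete, here is a minimal sketch of the two baseline strategies named above, rejection sampling and constrained generation via token masking, written in PyTorch. All names are assumptions for illustration: `model` is taken to map a token-ID sequence to per-position next-token logits, and `has_error` / `token_leads_to_error` stand in for whatever detector (incremental parser, regex, substring search) is in use. This sketches the baselines, not the paper's own method.

```python
import torch

def rejection_sample(model, prompt_ids, has_error, max_new_tokens=128, max_tries=100):
    # Distribution-preserving baseline: resample entire outputs until one
    # passes the error detector. Exact, but cost grows as errors become likely.
    for _ in range(max_tries):
        ids = prompt_ids.clone()
        for _ in range(max_new_tokens):
            logits = model(ids)[-1]  # assumed: per-position logits, take last
            next_id = torch.multinomial(torch.softmax(logits, dim=-1), 1)
            ids = torch.cat([ids, next_id])
        if not has_error(ids):  # e.g. parser or regex check over the output
            return ids
    raise RuntimeError("no acceptable sample within budget")

def constrained_sample(model, prompt_ids, token_leads_to_error, max_new_tokens=128):
    # Constrained-generation baseline: mask every token whose addition
    # immediately produces an error, then renormalize. Cheap, but the
    # renormalization is what amplifies low-probability continuations.
    ids = prompt_ids.clone()
    for _ in range(max_new_tokens):
        logits = model(ids)[-1]
        bad = torch.tensor([token_leads_to_error(ids, t) for t in range(logits.numel())])
        logits = logits.masked_fill(bad, float("-inf"))
        next_id = torch.multinomial(torch.softmax(logits, dim=-1), 1)
        ids = torch.cat([ids, next_id])
    return ids
```

ASAp sits between these two extremes: per the description above, it keeps rejection sampling's target distribution but effectively samples without replacement, discounting continuations already shown to lead to errors rather than resampling from scratch.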


Drug-drug interaction prediction based on co-medication patterns and graph matching

arXiv.org Machine Learning

Background: The problem of predicting whether a drug combination of arbitrary order is likely to induce adverse drug reactions is considered in this manuscript.

Methods: Novel kernels over drug combinations of arbitrary orders are developed within support vector machines for the prediction. Graph matching methods are used in the novel kernels to measure the similarities among drug combinations, in which drug co-medication patterns are leveraged to measure single-drug similarities.

Results: The experimental results on a real-world dataset demonstrate that the new kernels achieve an area under the curve (AUC) value of 0.912 on the prediction problem.

Conclusions: The new methods, with drug co-medication-based single-drug similarities, can accurately predict whether a drug combination is likely to induce adverse drug reactions of interest.

Keywords: drug-drug interaction prediction; drug combination similarity; co-medication; graph matching
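As a rough illustration of the pipeline the abstract describes, the sketch below builds a matching-based kernel over drug combinations and feeds it to an SVM via a precomputed Gram matrix. Everything here is an assumption for illustration: `drug_sim` stands in for the co-medication-based single-drug similarity matrix, the optimal-assignment matching is one simple way to compare combinations of different orders (the paper's actual graph-matching kernels may differ), and assignment-based similarities are not guaranteed to be positive semi-definite kernels in general.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.svm import SVC

def combination_similarity(combo_a, combo_b, drug_sim):
    # Compare two drug combinations (lists of drug indices, possibly of
    # different orders) by optimally matching their member drugs and
    # summing the matched single-drug similarities.
    sub = drug_sim[np.ix_(combo_a, combo_b)]
    rows, cols = linear_sum_assignment(-sub)  # maximize total similarity
    # Normalize by the larger order so unmatched drugs count against the score.
    return sub[rows, cols].sum() / max(len(combo_a), len(combo_b))

def gram_matrix(combos, drug_sim):
    n = len(combos)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            K[i, j] = K[j, i] = combination_similarity(combos[i], combos[j], drug_sim)
    return K

# Usage sketch: combos is a list of drug-index lists, y the ADR labels.
# K = gram_matrix(combos, drug_sim)
# clf = SVC(kernel="precomputed").fit(K, y)
```

Using `SVC(kernel="precomputed")` keeps the combination similarity logic outside the SVM itself, which is the natural fit when the kernel is defined over structured objects such as drug sets rather than fixed-length feature vectors.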