Cactus: Accelerating Auto-Regressive Decoding with Constrained Acceptance Speculative Sampling

Hao, Yongchang, Mou, Lili

arXiv.org Machine Learning

Speculative sampling (SpS) has been successful in accelerating the decoding throughput of auto-regressive large language models by leveraging smaller draft models. SpS strictly enforces the generated distribution to match that of the verifier LLM. This is unnecessarily restrictive, as slight variations of the verifier's distribution, such as sampling with top-$k$ or temperature, would also be acceptable. Typical acceptance sampling (TAS) alleviates this issue by accepting more tokens using entropy-based heuristics. However, this approach distorts the verifier distribution, potentially degrading output quality when the verifier encodes critical information. In this work, we formalize the speculative sampling algorithm through the lens of constrained optimization. Based on this formulation, we propose Cactus (constrained acceptance speculative sampling), a method that guarantees controlled divergence from the verifier distribution while increasing acceptance rates. Empirical results across a wide range of benchmarks confirm the effectiveness of our approach.
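As context for the acceptance rates the abstract discusses, the baseline SpS acceptance rule (which Cactus relaxes; the Cactus-specific constrained criterion is not detailed in the abstract) can be sketched as follows. A draft token $x$ sampled from the draft distribution $q$ is accepted with probability $\min(1, p(x)/q(x))$ against the verifier distribution $p$; on rejection, a replacement is drawn from the renormalised residual $\max(p - q, 0)$, which preserves $p$ exactly.

```python
import numpy as np

def sps_accept(draft_probs, verifier_probs, token, rng):
    """Standard SpS verification step (a sketch, not the Cactus criterion).

    Accepts the draft token with probability min(1, p(x)/q(x)); on
    rejection, resamples from the renormalised residual max(p - q, 0),
    so the marginal output distribution equals the verifier's.
    """
    q, p = draft_probs[token], verifier_probs[token]
    if rng.random() < min(1.0, p / q):
        return token, True  # draft token accepted
    residual = np.maximum(verifier_probs - draft_probs, 0.0)
    residual /= residual.sum()  # renormalise the rejection distribution
    return int(rng.choice(len(residual), p=residual)), False
```

TAS-style methods raise the acceptance probability above $\min(1, p(x)/q(x))$ at the cost of distorting $p$; the abstract's claim is that Cactus instead bounds that distortion explicitly.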






EAI: Emotional Decision-Making of LLMs in Strategic Games and Ethical Dilemmas

Neural Information Processing Systems

We introduce the novel EAI framework for integrating emotion modeling into LLMs to examine the emotional impact on ethics and LLM-based decision-making in various strategic games, including bargaining and repeated games. Our experimental study with various LLMs demonstrated that emotions can significantly alter the ethical decision-making landscape of LLMs, highlighting the need for robust mechanisms to ensure consistent ethical standards. Our game-theoretic analysis revealed that LLMs are susceptible to emotional biases influenced by model size, alignment strategies, and primary pretraining language. Notably, these biases often diverge from typical human emotional responses, occasionally leading to unexpected drops in cooperation rates, even under positive emotional influence.



Appendices

Neural Information Processing Systems

Additionally, to avoid gradients with infinite means even if D_L is not contractive, we consider a spectral normalisation: instead of computing the raw recursion η_0 = ε and η_k = D_L η_{k-1} for k ∈ {1, ..., N}, we set η_0 = ε and normalise each iterate. The motivation was to obtain a quadratic increase of the penalty term as the largest absolute eigenvalue approaches 1, and then smoothly switch to a linear function for values larger than δ_2. The suggested approach can perform poorly for non-convex potentials, or even for convex potentials such as those arising in a logistic regression model on some data sets. The idea is then to run HMC with a unit mass matrix on the transformed variables z = f^{-1}(q), where q ∼ π. Hessian-vector products can similarly be computed using vector-Jacobian products: with g(z) = grad(U, z), we compute ∇²U(z) w = vjp(g, z, w)^⊤ for z = f^{-1}(stop_grad(f(z_{⌊L/2⌋}))). We also stop all gradients of U.
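The normalised recursion and the quadratic-to-linear penalty described above can be sketched as follows. The thresholds `delta1` and `delta2` are illustrative assumptions, not values taken from the paper; the power iteration estimates the largest absolute eigenvalue of the linear map `matvec` (standing in for D_L), with per-step rescaling so iterates neither overflow nor underflow when the map is not contractive.

```python
import numpy as np

def spectral_radius(matvec, dim, n_iters=100, rng=None):
    # Power iteration with per-step normalisation: instead of the raw
    # recursion eta_k = D_L eta_{k-1}, each iterate is rescaled to unit
    # norm; the final step length estimates the largest |eigenvalue|.
    rng = rng or np.random.default_rng(0)
    eta = rng.normal(size=dim)
    for _ in range(n_iters):
        eta = matvec(eta)
        eta /= np.linalg.norm(eta)
    return np.linalg.norm(matvec(eta))

def eig_penalty(lam, delta1=0.9, delta2=1.1):
    # Illustrative penalty (delta1/delta2 are assumed thresholds):
    # quadratic as |lambda| approaches 1, continued linearly above
    # delta2 with matching value and slope, so the penalty's gradient
    # stays bounded for large eigenvalues.
    if lam <= delta1:
        return 0.0
    if lam <= delta2:
        return (lam - delta1) ** 2
    slope = 2.0 * (delta2 - delta1)
    return (delta2 - delta1) ** 2 + slope * (lam - delta2)
```

Matching value and slope at `delta2` makes the switch smooth (C¹), which is what "smoothly switch to a linear function" suggests.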