
Collaborating Authors: Huang, Yu-Chao


On Statistical Rates of Conditional Diffusion Transformers: Approximation, Estimation and Minimax Optimality

arXiv.org Machine Learning

We investigate the approximation and estimation rates of conditional diffusion transformers (DiTs) with classifier-free guidance. We present a comprehensive analysis of "in-context" conditional DiTs under four common data assumptions. We show that both conditional DiTs and their latent variants lead to the minimax optimality of unconditional DiTs under identified settings. Specifically, we discretize the input domain into infinitesimal grids and then perform a term-by-term Taylor expansion on the conditional diffusion score function under a Hölder smoothness assumption on the data. This enables fine-grained use of transformers' universal approximation through a more detailed piecewise-constant approximation, and hence yields tighter bounds. We then extend the analysis to the latent setting under a linear latent subspace assumption. We show not only that latent conditional DiTs achieve lower approximation and estimation bounds than conditional DiTs, but also that latent unconditional DiTs are minimax optimal. Our findings establish statistical limits for conditional and unconditional DiTs and offer practical guidance toward developing more efficient and accurate DiT models.
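The discretize-and-Taylor step in the abstract admits a compact sketch. Assuming data supported on [0,1]^d, a β-Hölder score component s, and a grid of side length 1/N with cells C_v centered at points x_v (notation ours, not the paper's), the piecewise Taylor approximation reads:

\[
s(x) \;\approx\; \sum_{v \in [N]^d} \mathbf{1}\{x \in C_v\}
\sum_{|\alpha| \le \lfloor \beta \rfloor}
\frac{\partial^\alpha s(x_v)}{\alpha!}\,(x - x_v)^\alpha,
\qquad
\sup_{x \in [0,1]^d} |\text{error}| \;\lesssim\; N^{-\beta}.
\]

On each cell the Taylor coefficients are constants, so a transformer only needs to realize indicator-times-monomial pieces; this is where the piecewise-constant universal-approximation argument enters and where the finer grid yields the tighter bounds claimed above.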


Two Tales of Persona in LLMs: A Survey of Role-Playing and Personalization

arXiv.org Artificial Intelligence

The concept of persona, originally adopted in the dialogue literature, has resurged as a promising framework for tailoring large language models (LLMs) to specific contexts (e.g., personalized search, LLM-as-a-judge). However, the growing research on leveraging persona in LLMs is relatively disorganized and lacks a systematic taxonomy. To close this gap, we present a comprehensive survey that categorizes the current state of the field. We identify two lines of research, namely (1) LLM Role-Playing, where personas are assigned to LLMs, and (2) LLM Personalization, where LLMs are adapted to user personas. Additionally, we introduce existing methods for LLM personality evaluation. To the best of our knowledge, this is the first survey of role-playing and personalization in LLMs under the unified view of persona. We continuously maintain a paper collection to foster future endeavors: https://github.com/MiuLab/PersonaLLM-Survey
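The role-playing/personalization split is easy to make concrete at the prompt level. Below is a minimal Python sketch, assuming a generic chat-message schema; the function names and fields are illustrative, not drawn from the survey:

# Contrasting the survey's two lines of research at the prompt level.
# Role-Playing: the persona is assigned to the LLM itself.
# Personalization: the user's persona conditions the LLM's output.

def role_playing_messages(persona: str, user_query: str) -> list[dict]:
    """LLM Role-Playing: the model is instructed to *be* the persona."""
    return [
        {"role": "system", "content": f"You are {persona}. Stay in character."},
        {"role": "user", "content": user_query},
    ]

def personalization_messages(user_profile: str, user_query: str) -> list[dict]:
    """LLM Personalization: the model tailors its answer *to* the user."""
    return [
        {"role": "system", "content": "Adapt your answer to the user profile below.\n"
                                      f"User profile: {user_profile}"},
        {"role": "user", "content": user_query},
    ]

if __name__ == "__main__":
    print(role_playing_messages("Sherlock Holmes", "Who took the jewels?"))
    print(personalization_messages("vegetarian, beginner cook", "Suggest a dinner recipe."))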


BiSHop: Bi-Directional Cellular Learning for Tabular Data with Generalized Sparse Modern Hopfield Model

arXiv.org Machine Learning

The development of deep learning architectures for tabular data has recently seen rapid advances [Arik and Pfister, 2021, Gorishniy et al., 2021, Huang et al., 2020, Somepalli et al., 2021]. The primary driving force behind this trend is a limitation of the currently dominant approach to tabular data: tree-based methods. While tree-based methods excel at tabular learning, they cannot be integrated with deep learning architectures. The pursuit of deep tabular learning is therefore not just a matter of improving performance; it is also crucial for bridging this gap. However, a recent tabular benchmark study [Grinsztajn et al., 2022] reveals that tree-based methods still surpass deep learning models, underscoring two main challenges for deep tabular learning, as highlighted by Grinsztajn et al. [2022, Sections 5.3 & 5.4]: (C1) Non-Rotationally Invariant Data Structure: the non-rotationally invariant structure of tabular data weakens the effectiveness of deep learning models whose learning procedures are rotationally invariant.
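The (C1) point can be illustrated with a small experiment: an axis-aligned label rule is easy for a tree ensemble on the raw columns but becomes hard once the features are randomly rotated, while a rotation-insensitive learner is largely unaffected. A hedged Python sketch (the synthetic data, models, and hyperparameters are ours, not from the paper or the benchmark):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 2000, 10
X = rng.normal(size=(n, d))
# Axis-aligned label rule: tabular signal lives in individual columns.
y = ((X[:, 0] > 0) & (X[:, 1] > 0)).astype(int)

# A random orthogonal rotation mixes columns, destroying the axis-aligned structure.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
X_rot = X @ Q

for name, feats in [("original", X), ("rotated", X_rot)]:
    Xtr, Xte, ytr, yte = train_test_split(feats, y, random_state=0)
    forest = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
    mlp = MLPClassifier(max_iter=500, random_state=0).fit(Xtr, ytr)
    print(name, "forest:", forest.score(Xte, yte), "mlp:", mlp.score(Xte, yte))

The expected pattern is that the forest's test accuracy drops noticeably after rotation while the MLP's changes little, mirroring the benchmark's observation that tabular data is non-rotationally invariant and that rotationally invariant learning procedures forfeit this structure.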