Shah, Swair
Analysis of Indic Language Capabilities in LLMs
Vaidya, Aatman, Prabhakar, Tarunima, George, Denny, Shah, Swair
This report evaluates the ability of text-in, text-out Large Language Models (LLMs) to understand and generate Indic languages. This evaluation is used to identify and prioritize Indic languages suited for inclusion in safety benchmarks. We conduct this study by reviewing existing evaluation studies and datasets, as well as a set of twenty-eight LLMs that support Indic languages. We analyze the LLMs on the basis of their training data, the licenses for the models and data, the type of access, and the model developers. We also compare Indic language performance across evaluation datasets and find significant performance disparities across Indic languages. Hindi is the most widely represented language in models. While model performance roughly correlates with the number of speakers for the top five languages, the correspondence varies beyond that.
DARD: A Multi-Agent Approach for Task-Oriented Dialog Systems
Gupta, Aman, Ravichandran, Anirudh, Zhang, Ziji, Shah, Swair, Beniwal, Anurag, Sadagopan, Narayanan
Task-oriented dialogue systems are essential for applications ranging from customer service to personal assistants and are widely used across various industries. However, developing effective multi-domain systems remains a significant challenge due to the complexity of handling diverse user intents, entity types, and domain-specific knowledge across several domains. In this work, we propose DARD (Domain Assigned Response Delegation), a multi-agent conversational system capable of successfully handling multi-domain dialogs. DARD leverages domain-specific agents, orchestrated by a central dialog manager agent. Our extensive experiments compare and utilize various agent modeling approaches, combining the strengths of smaller fine-tuned models (Flan-T5-large & Mistral-7B) with their larger LLM counterparts (Claude Sonnet 3.0). We provide insights into the strengths and limitations of each approach, highlighting the benefits of our multi-agent framework in terms of flexibility and composability. We evaluate DARD using the well-established MultiWOZ benchmark, achieving state-of-the-art performance by improving the dialogue inform rate by 6.6% and the success rate by 4.1% over the best-performing existing approaches. Additionally, we discuss various annotator discrepancies and issues within the MultiWOZ dataset and its evaluation system.
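The abstract describes the architecture only in words; below is a minimal Python sketch of the delegation pattern it outlines, where a central dialog manager routes each turn to a domain-specific agent. All class and function names (DialogManager, DomainAgent, classify_domain) are illustrative assumptions and are not taken from the DARD implementation.

```python
# Minimal sketch of the delegation pattern described above: a central
# dialog-manager agent assigns each user turn to a domain-specific agent.
# Names are hypothetical, not from the DARD codebase.

class DomainAgent:
    """One agent per domain (e.g. hotel, restaurant, train)."""

    def __init__(self, domain, respond_fn):
        self.domain = domain
        self.respond_fn = respond_fn  # backed by a fine-tuned model or an LLM

    def respond(self, dialog_history):
        return self.respond_fn(dialog_history)


class DialogManager:
    """Central orchestrator: picks the responsible domain and delegates the turn."""

    def __init__(self, agents, classify_domain):
        self.agents = {a.domain: a for a in agents}
        self.classify_domain = classify_domain  # e.g. an LLM-based domain classifier

    def handle_turn(self, dialog_history):
        domain = self.classify_domain(dialog_history)
        agent = self.agents.get(domain)
        if agent is None:
            return "Sorry, I cannot help with that request."
        return agent.respond(dialog_history)


# Example wiring with stub responders standing in for the fine-tuned / LLM agents.
if __name__ == "__main__":
    hotel = DomainAgent("hotel", lambda h: "I found 3 hotels in the centre.")
    train = DomainAgent("train", lambda h: "The next train leaves at 10:15.")
    manager = DialogManager(
        [hotel, train],
        classify_domain=lambda h: "hotel" if "hotel" in h[-1].lower() else "train",
    )
    print(manager.handle_turn(["I need a cheap hotel in Cambridge."]))
```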
Solving Generalized Column Subset Selection With Heuristic Search
Shah, Swair (The University of Texas at Dallas) | He, Baokun (The University of Texas at Dallas) | Xu, Ke (The University of Texas at Dallas) | Maung, Crystal (The University of Texas at Dallas) | Schweitzer, Haim (The University of Texas at Dallas)
We address the problem of approximating a matrix by a linear combination of a column-sparse matrix and a low-rank matrix. Two variants of a heuristic search algorithm are described. The first produces an optimal solution but may be slow, as these problems are believed to be NP-hard. The second is much faster, but only guarantees a suboptimal solution. The quality of the approximation and the optimality criterion can be specified in terms of unitarily invariant norms.
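The abstract states the problem only in words; one plausible formalization, using notation assumed here rather than taken from the paper, is the following.

```latex
% One plausible formalization of the problem stated above (notation assumed):
% approximate A by a column-sparse matrix S plus a low-rank matrix L,
% measured in any unitarily invariant norm \|\cdot\|.
\begin{equation*}
\min_{S,\,L}\ \bigl\|A - (S + L)\bigr\|
\quad \text{subject to} \quad
\operatorname{card}_{\mathrm{col}}(S) \le s, \qquad
\operatorname{rank}(L) \le r,
\end{equation*}
% where \operatorname{card}_{\mathrm{col}}(S) counts the nonzero columns of S,
% and s, r bound the column sparsity and the rank respectively.
```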
Cleaning the Null Space: A Privacy Mechanism for Predictors
Xu, Ke (The University of Texas at Dallas) | Cao, Tongyi (University of Massachusetts Amherst) | Shah, Swair (The University of Texas at Dallas) | Maung, Crystal (The University of Texas at Dallas) | Schweitzer, Haim (The University of Texas at Dallas)
In standard machine learning and regression settings, feature values are used to predict some desired information. The privacy challenge considered here is to prevent an adversary from using available feature values to predict confidential information that one wishes to keep secret. We show that this can sometimes be achieved with almost no effect on the quality of predicting desired information. We describe two algorithms aimed at providing such privacy when the predictors have a linear operator in the first stage. The desired effect can be achieved by zeroing out feature components in the approximate null space of the linear operator.
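A minimal numpy sketch of the mechanism described above follows: remove the components of a feature vector that lie in the approximate null space of the predictor's first-stage linear operator W. The function name, the threshold parameter, and the toy data are illustrative assumptions, not the paper's implementation.

```python
# Sketch: zero out feature components in the approximate null space of W.
# Directions v with small ||W v|| barely affect the desired prediction W @ x,
# so removing them preserves utility while discarding information an adversary
# could use to infer confidential attributes.
import numpy as np

def sanitize_features(x, W, tol=1e-2):
    """Project x onto the complement of the approximate null space of W."""
    # Right singular vectors with small singular values span the approx. null space.
    _, s, Vt = np.linalg.svd(W, full_matrices=True)
    keep = np.zeros(Vt.shape[0], dtype=bool)
    keep[: len(s)] = s > tol * s[0]      # directions the operator actually "sees"
    V_keep = Vt[keep].T                  # basis of the retained subspace
    return V_keep @ (V_keep.T @ x)       # drop approximate-null-space components

# Toy usage: the desired prediction W @ x is essentially unchanged after sanitizing.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 10))             # wide operator -> large null space
x = rng.normal(size=10)
x_priv = sanitize_features(x, W)
print(np.allclose(W @ x, W @ x_priv, atol=1e-6))  # True (up to tolerance)
```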
Enhancing the Privacy of Predictors
Xu, Ke (The University of Texas at Dallas) | Shah, Swair (The University of Texas at Dallas) | Cao, Tongyi (University of Massachusetts Amherst) | Maung, Crystal (The University of Texas at Dallas) | Schweitzer, Haim (The University of Texas at Dallas)
The privacy challenge considered here is to prevent an adversary from using available feature values to predict confidential information. We propose an algorithm providing such privacy for predictors that have a linear operator in the first stage. Privacy is achieved by zeroing out feature components in the approximate null space of the linear operator. We show that this has little effect on predicting desired information.