Garibay, Ozlem Ozmen
BnTTS: Few-Shot Speaker Adaptation in Low-Resource Setting
Basher, Mohammad Jahid Ibna, Kowsher, Md, Islam, Md Saiful, Nandi, Rabindra Nath, Prottasha, Nusrat Jahan, Menon, Mehadi Hasan, Muntasir, Tareq Al, Chowdhury, Shammur Absar, Alam, Firoj, Yousefi, Niloofar, Garibay, Ozlem Ozmen
This paper introduces BnTTS (Bangla Text-To-Speech), the first framework for Bangla speaker-adaptation-based TTS, designed to bridge the gap in Bangla speech synthesis using minimal training data. Building upon the XTTS architecture, our approach integrates Bangla into a multilingual TTS pipeline, with modifications to account for the phonetic and linguistic characteristics of the language. We pre-train BnTTS on 3.85k hours of Bangla speech with corresponding text labels and evaluate performance in both zero-shot and few-shot settings on our proposed test dataset. Empirical evaluations in few-shot settings show that BnTTS significantly improves the naturalness, intelligibility, and speaker fidelity of synthesized Bangla speech. Compared to state-of-the-art Bangla TTS systems, BnTTS exhibits superior performance on Subjective Mean Opinion Score (SMOS), Naturalness, and Clarity metrics.
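Since BnTTS builds on XTTS, few-shot speaker adaptation at inference time can be sketched with the Coqui TTS API by conditioning on a short reference clip of the target speaker. The checkpoint path, the "bn" language code, and the file names below are assumptions for illustration only, not artifacts released with the paper.

```python
# Hypothetical sketch: conditioning an XTTS-style model on a short reference clip of
# the target speaker. The checkpoint, Bangla ("bn") support, and file paths are
# assumptions -- stock XTTS does not ship a Bangla model.
from TTS.api import TTS

# Load a (hypothetical) BnTTS checkpoint fine-tuned for Bangla.
tts = TTS(model_path="bntts_checkpoint/", config_path="bntts_checkpoint/config.json")

# Speaker adaptation: condition on a few seconds of the target voice instead of
# retraining the whole model.
tts.tts_to_file(
    text="<Bangla input text>",
    speaker_wav="target_speaker_reference.wav",
    language="bn",
    file_path="adapted_output.wav",
)
```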
BoKDiff: Best-of-K Diffusion Alignment for Target-Specific 3D Molecule Generation
Yalabadi, Ali Khodabandeh, Yazdani-Jahromi, Mehdi, Garibay, Ozlem Ozmen
Structure-based drug design (SBDD) leverages the 3D structure of biomolecular targets to guide the creation of new therapeutic agents. Recent advances in generative models, including diffusion models and geometric deep learning, have demonstrated promise in optimizing ligand generation. However, the scarcity of high-quality protein-ligand complex data and the inherent challenges in aligning generated ligands with target proteins limit the effectiveness of these methods. We propose BoKDiff, a novel framework that enhances ligand generation by combining multi-objective optimization and Best-of-K alignment methodologies. Built upon the DecompDiff model, BoKDiff generates diverse candidates and ranks them using a weighted evaluation of molecular properties such as QED, SA, and docking scores. To address alignment challenges, we introduce a method that relocates the center of mass of generated ligands to their docking poses, enabling accurate sub-component extraction. Additionally, we integrate a Best-of-N (BoN) sampling approach, which selects the optimal ligand from multiple generated candidates without requiring fine-tuning. BoN achieves exceptional results, with QED values exceeding 0.6, SA scores above 0.75, and a success rate surpassing 35%, demonstrating its efficiency and practicality. BoKDiff achieves state-of-the-art results on the CrossDocked2020 dataset, including a -8.58 average Vina docking score and a 26% success rate in molecule generation. This study is the first to apply Best-of-K alignment and Best-of-N sampling to SBDD, highlighting their potential to bridge generative modeling with practical drug discovery requirements. The code is provided at https://github.com/khodabandeh-ali/BoKDiff.git.
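A minimal sketch of the Best-of-K / Best-of-N selection idea described above: generate K candidates, score each with a weighted combination of QED, SA, and docking score, and keep the top-ranked ligand without any fine-tuning. The weights, the Candidate fields, and the example molecules are illustrative, not the paper's configuration.

```python
# Best-of-K selection over generated ligands with a weighted multi-objective score.
from dataclasses import dataclass

@dataclass
class Candidate:
    smiles: str
    qed: float   # drug-likeness, higher is better
    sa: float    # normalized synthetic accessibility, higher is better
    vina: float  # docking score, lower (more negative) is better

def weighted_score(c: Candidate, w_qed=1.0, w_sa=1.0, w_vina=1.0) -> float:
    # Negate the Vina score so that "higher is better" holds for every term.
    return w_qed * c.qed + w_sa * c.sa + w_vina * (-c.vina)

def best_of_k(candidates: list[Candidate]) -> Candidate:
    # Rank the K generated ligands and keep the best one; no fine-tuning involved.
    return max(candidates, key=weighted_score)

candidates = [
    Candidate("CCO", qed=0.61, sa=0.78, vina=-7.9),
    Candidate("c1ccccc1O", qed=0.55, sa=0.82, vina=-8.6),
]
print(best_of_k(candidates).smiles)
```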
Fair Bilevel Neural Network (FairBiNN): On Balancing fairness and accuracy via Stackelberg Equilibrium
Yazdani-Jahromi, Mehdi, Yalabadi, Ali Khodabandeh, Rajabi, AmirArsalan, Tayebi, Aida, Garibay, Ivan, Garibay, Ozlem Ozmen
The persistent challenge of bias in machine learning models necessitates robust solutions to ensure parity and equal treatment across diverse groups, particularly in classification tasks. Current methods for mitigating bias often result in information loss and an inadequate balance between accuracy and fairness. To address this, we propose a novel methodology grounded in bilevel optimization principles. Our deep learning-based approach concurrently optimizes the accuracy and fairness objectives and, under certain assumptions, achieves provably Pareto-optimal solutions while mitigating bias in the trained model. Theoretical analysis shows that the upper bound on the loss incurred by this method is less than or equal to the loss of the Lagrangian approach, which adds a regularization term to the loss function. We demonstrate the efficacy of our model primarily on tabular datasets such as UCI Adult and Heritage Health. When benchmarked against state-of-the-art fairness methods, our model exhibits superior performance, advancing fairness-aware machine learning solutions and bridging the accuracy-fairness gap. The implementation of FairBiNN is available at https://github.com/yazdanimehdi/FairBiNN.
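A minimal sketch of the Stackelberg-style alternation implied by the bilevel formulation: one parameter set (the leader) is updated on the accuracy loss, and a disjoint parameter set (the follower) is updated on a fairness loss. The toy network, the demographic-parity penalty, and the hyperparameters are illustrative assumptions, not the FairBiNN architecture or its proofs.

```python
# Alternating leader/follower updates on disjoint parameter sets (toy example).
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(16, 32), nn.ReLU())  # accuracy-player parameters
fair_layer = nn.Linear(32, 32)                           # fairness-player parameters
head = nn.Linear(32, 1)

opt_acc = torch.optim.Adam(list(backbone.parameters()) + list(head.parameters()), lr=1e-3)
opt_fair = torch.optim.Adam(fair_layer.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def forward(x):
    return head(fair_layer(backbone(x))).squeeze(-1)

def demographic_parity(logits, s):
    # |mean prediction on group s=1 - mean prediction on group s=0|
    p = torch.sigmoid(logits)
    return (p[s == 1].mean() - p[s == 0].mean()).abs()

x = torch.randn(64, 16)
y = torch.randint(0, 2, (64,)).float()
s = torch.randint(0, 2, (64,))
for _ in range(100):
    # Leader step: minimize classification loss w.r.t. backbone + head.
    opt_acc.zero_grad(); bce(forward(x), y).backward(); opt_acc.step()
    # Follower step: minimize the fairness violation w.r.t. the fairness layer only.
    opt_fair.zero_grad(); demographic_parity(forward(x), s).backward(); opt_fair.step()
```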
LLM-Mixer: Multiscale Mixing in LLMs for Time Series Forecasting
Kowsher, Md, Sobuj, Md. Shohanur Islam, Prottasha, Nusrat Jahan, Alanis, E. Alejandro, Garibay, Ozlem Ozmen, Yousefi, Niloofar
Time series forecasting remains a challenging task, particularly in the presence of complex multiscale temporal patterns. This study presents LLM-Mixer, a framework that improves forecasting accuracy by combining multiscale time-series decomposition with pre-trained Large Language Models (LLMs). LLM-Mixer captures both short-term fluctuations and long-term trends by decomposing the data into multiple temporal resolutions and processing them with a frozen LLM, guided by a textual prompt designed specifically for time-series data. Extensive experiments on multivariate and univariate datasets demonstrate that LLM-Mixer achieves competitive performance, outperforming recent state-of-the-art models across various forecasting horizons. This work highlights the potential of combining multiscale analysis and LLMs for effective and scalable time-series forecasting.
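A minimal sketch of the multiscale decomposition step, assuming average pooling over a few scale factors; in LLM-Mixer these downsampled views would then be embedded and passed, together with a textual prompt, to a frozen LLM. The scale factors and pooling choice are illustrative assumptions.

```python
# Downsample a series into several temporal resolutions (toy decomposition step).
import torch
import torch.nn.functional as F

def multiscale_views(x: torch.Tensor, scales=(1, 2, 4)) -> list[torch.Tensor]:
    """x: (batch, length, channels) -> one tensor per temporal resolution."""
    views = []
    for s in scales:
        if s == 1:
            views.append(x)
        else:
            # Average-pool along time to obtain a coarser view of the series.
            pooled = F.avg_pool1d(x.transpose(1, 2), kernel_size=s, stride=s)
            views.append(pooled.transpose(1, 2))
    return views

series = torch.randn(8, 96, 7)       # e.g., 96 steps of a 7-variate series
for view in multiscale_views(series):
    print(view.shape)                 # (8, 96, 7), (8, 48, 7), (8, 24, 7)
```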
Parameter-Efficient Fine-Tuning of Large Language Models using Semantic Knowledge Tuning
Prottasha, Nusrat Jahan, Mahmud, Asif, Sobuj, Md. Shohanur Islam, Bhat, Prakash, Kowsher, Md, Yousefi, Niloofar, Garibay, Ozlem Ozmen
Large Language Models (LLMs) have gained significant popularity in recent years for specialized tasks using prompts, owing to their low computational cost. Standard methods like prefix tuning rely on special, modifiable tokens that lack semantic meaning and require extensive training to perform well, often falling short. In this context, we propose a novel method called Semantic Knowledge Tuning (SK-Tuning) for prompt and prefix tuning that employs meaningful words instead of random tokens. The method uses a fixed LLM to understand and process the semantic content of the prompt through its zero-shot capabilities, and then integrates the processed prompt with the input text to improve the model's performance on particular tasks. Our experimental results show that SK-Tuning achieves faster training times, fewer parameters, and superior performance on tasks such as text classification and understanding compared to other tuning methods. This approach offers a promising way to optimize the efficiency and effectiveness of LLMs on language tasks.
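A minimal sketch of the core idea, assuming a Hugging Face backbone: the prefix is initialized from the embeddings of meaningful words rather than random vectors, only the prefix is trainable, and the LLM itself stays frozen. The model name, prompt text, and wiring are assumptions, not the paper's exact pipeline.

```python
# Semantic prefix initialization with a frozen backbone (toy example).
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
for p in model.parameters():
    p.requires_grad = False                                 # the backbone stays frozen

prompt_words = "classify the sentiment of this review"       # meaningful words, not random tokens
prompt_ids = tok(prompt_words, return_tensors="pt").input_ids
with torch.no_grad():
    semantic_prefix = model.get_input_embeddings()(prompt_ids)   # (1, P, H)

# Only the prefix is trainable; it starts from semantic word embeddings.
prefix = torch.nn.Parameter(semantic_prefix.clone())

text_ids = tok("The movie was surprisingly good.", return_tensors="pt").input_ids
text_emb = model.get_input_embeddings()(text_ids)
inputs_embeds = torch.cat([prefix.expand(text_emb.size(0), -1, -1), text_emb], dim=1)
outputs = model(inputs_embeds=inputs_embeds)
print(outputs.last_hidden_state.shape)
```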
Agent-Based Modeling of C. Difficile Spread in Hospitals: Assessing Contribution of High-Touch vs. Low-Touch Surfaces and Inoculations' Containment Impact
Abdidizaji, Sina, Yalabadi, Ali Khodabandeh, Yazdani-Jahromi, Mehdi, Garibay, Ozlem Ozmen, Garibay, Ivan
Health issues and pandemics remain paramount concerns in the contemporary era. Clostridioides Difficile Infection (CDI) stands out as a critical healthcare-associated infection with global implications. Effectively understanding the mechanisms of infection dissemination within healthcare units and hospitals is imperative for implementing targeted containment measures. In this study, we address the limitations of prior research by Sulyok et al., who delineated two distinct categories of surfaces as high-touch and low-touch fomites and subsequently evaluated the spread contribution of each surface using mathematical modeling and Ordinary Differential Equations (ODEs). Acknowledging the indispensable role of spatial features and heterogeneity in modeling hospital and healthcare settings, we employ agent-based modeling to capture new insights. By incorporating spatial considerations and heterogeneous patients, we explore the impact of high-touch and low-touch surfaces on contamination transmission between patients. Furthermore, the study includes a comprehensive assessment of various cleaning protocols, with differing intervals and detergent efficacies, in order to identify the optimal cleaning strategy and the most influential factor among the alternatives. Our results indicate that, among the various factors, the frequency of cleaning intervals is the most critical element for controlling the spread of CDI in a hospital environment.
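A toy agent-based sketch of the contamination pathway studied here: colonized patients shed onto high- and low-touch surfaces, susceptible patients may acquire CDI from the surfaces they touch, and surfaces are cleaned at a fixed interval with a given efficacy. All agents, rates, and intervals are illustrative stand-ins, not the calibrated model from the study.

```python
# Toy fomite-mediated transmission loop with periodic cleaning (illustrative only).
import random

random.seed(0)
N_PATIENTS, STEPS = 20, 240               # e.g., hourly steps over 10 days
CLEAN_INTERVAL, CLEAN_EFFICACY = 24, 0.9   # clean daily, remove 90% of the load
TOUCH_P = {"high": 0.30, "low": 0.05}      # per-step touch probabilities by surface type
TRANSFER = 0.10                            # acquisition probability given a contaminated touch

surface_load = {"high": 0.0, "low": 0.0}
colonized = [random.random() < 0.1 for _ in range(N_PATIENTS)]

for t in range(STEPS):
    for i in range(N_PATIENTS):
        for kind, p_touch in TOUCH_P.items():
            if random.random() < p_touch:
                if colonized[i]:
                    surface_load[kind] += 1.0              # shedding onto the fomite
                elif random.random() < TRANSFER * min(surface_load[kind], 1.0):
                    colonized[i] = True                    # acquisition from the fomite
    if t % CLEAN_INTERVAL == 0:
        for kind in surface_load:
            surface_load[kind] *= (1.0 - CLEAN_EFFICACY)   # periodic cleaning

print(f"colonized patients after {STEPS} steps: {sum(colonized)}")
```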
Controlling the Misinformation Diffusion in Social Media by the Effect of Different Classes of Agents
Yalabadi, Ali Khodabandeh, Yazdani-Jahromi, Mehdi, Abdidizaji, Sina, Garibay, Ivan, Garibay, Ozlem Ozmen
The rapid and widespread dissemination of misinformation through social networks is a growing concern in today's digital age. This study focuses on modeling fake news diffusion, uncovering its spreading dynamics, and designing control strategies. A common approach to modeling misinformation dynamics is the use of SIR-based models. Our approach extends a SIR-based model called SBFC, which has three states: Susceptible, Believer, and Fact-Checker. The dynamics and transitions between states depend on neighbors' beliefs, hoax credibility, spreading rate, the probability of verifying the news, and the probability of forgetting the current state. Our contribution is to bring this model closer to real social networks by considering different classes of agents with distinct characteristics. We propose two main strategies for confronting misinformation diffusion. First, we can educate a small class of agents, such as scholars or influencers, to improve their ability to verify the news or to retain their state longer. Second, we can add fact-checker bots to the network to spread facts and influence their neighbors' states. Our results show that both approaches can effectively control the spread of misinformation.
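A minimal sketch of SBFC-style dynamics on a small network, assuming networkx: each agent is Susceptible, Believer, or Fact-Checker, and transitions depend on neighbor states, hoax credibility, the spreading rate, a verification probability, and a forgetting probability. The graph, update rule, and parameter values are illustrative, not the paper's calibrated model.

```python
# Toy SBFC update loop on a small-world graph (illustrative parameters).
import random
import networkx as nx

random.seed(1)
G = nx.watts_strogatz_graph(200, 6, 0.1)
state = {n: "S" for n in G}
state[0] = "B"                                          # seed one believer
ALPHA, CRED, P_VERIFY, P_FORGET = 0.3, 0.2, 0.05, 0.1   # spread, credibility, verify, forget

for _ in range(50):
    new_state = dict(state)
    for n in G:
        nbrs = list(G.neighbors(n))
        n_b = sum(state[v] == "B" for v in nbrs)
        n_f = sum(state[v] == "F" for v in nbrs)
        if state[n] == "S" and nbrs:
            # Belief spreads with rate alpha, biased toward the hoax by its credibility.
            p_believe = ALPHA * n_b * (1 + CRED) / len(nbrs)
            p_fact = ALPHA * n_f * (1 - CRED) / len(nbrs)
            r = random.random()
            if r < p_believe:
                new_state[n] = "B"
            elif r < p_believe + p_fact:
                new_state[n] = "F"
        elif state[n] == "B" and random.random() < P_VERIFY:
            new_state[n] = "F"                          # believer verifies the news
        elif state[n] in ("B", "F") and random.random() < P_FORGET:
            new_state[n] = "S"                          # forget and become susceptible again
    state = new_state

print({s: sum(v == s for v in state.values()) for s in "SBF"})
```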
FragXsiteDTI: Revealing Responsible Segments in Drug-Target Interaction with Transformer-Driven Interpretation
Yalabadi, Ali Khodabandeh, Yazdani-Jahromi, Mehdi, Yousefi, Niloofar, Tayebi, Aida, Abdidizaji, Sina, Garibay, Ozlem Ozmen
Drug-Target Interaction (DTI) prediction is vital for drug discovery, yet challenges persist in achieving model interpretability and optimizing performance. We propose a novel transformer-based model, FragXsiteDTI, that aims to address these challenges in DTI prediction. Notably, FragXsiteDTI is the first DTI model to simultaneously leverage drug molecule fragments and protein pockets. Our information-rich representations for both proteins and drugs offer a detailed perspective on their interaction. Inspired by the Perceiver IO framework, our model features a learnable latent array that first interacts with protein binding-site embeddings via cross-attention, is then refined through self-attention, and finally serves as the query over drug fragments in the drug's cross-attention transformer block. This learnable query array acts as a mediator, enabling seamless information translation while preserving critical nuances in drug-protein interactions. Our computational results on three benchmark datasets demonstrate the superior predictive power of our model over several state-of-the-art models. We also show that our model is interpretable in terms of the critical components of both target proteins and drug molecules within drug-target pairs.
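A minimal PyTorch sketch of the attention flow described above: a learnable latent array cross-attends to binding-site embeddings, is refined by self-attention, and then queries drug-fragment embeddings; the fragment attention weights are what supports interpretation. Dimensions, head counts, and the pooling head are illustrative assumptions, not the paper's configuration.

```python
# Perceiver IO-style latent query over protein pockets and drug fragments (toy sizes).
import torch
import torch.nn as nn

D, N_LATENT = 128, 32
latents = nn.Parameter(torch.randn(N_LATENT, D))               # learnable latent array
protein_xattn = nn.MultiheadAttention(D, num_heads=4, batch_first=True)
latent_selfattn = nn.MultiheadAttention(D, num_heads=4, batch_first=True)
drug_xattn = nn.MultiheadAttention(D, num_heads=4, batch_first=True)
head = nn.Linear(D, 1)

pocket = torch.randn(1, 50, D)      # protein binding-site embeddings
fragments = torch.randn(1, 12, D)   # drug-fragment embeddings

q = latents.unsqueeze(0)
q, _ = protein_xattn(q, pocket, pocket)                    # latents attend to the pocket
q, _ = latent_selfattn(q, q, q)                            # refine latents with self-attention
out, frag_weights = drug_xattn(q, fragments, fragments)    # latents query drug fragments
logit = head(out.mean(dim=1))                              # interaction prediction
print(logit.shape, frag_weights.shape)                     # fragment weights aid interpretability
```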
Distraction is All You Need for Fairness
Yazdani-Jahromi, Mehdi, Rajabi, AmirArsalan, Yalabadi, Ali Khodabandeh, Tayebi, Aida, Garibay, Ozlem Ozmen
Bias in training datasets must be managed across groups in classification tasks to ensure parity or equal treatment. With the recent growth in artificial intelligence models and their expanding role in automated decision-making, ensuring that these models are not biased is vital. There is abundant evidence that these models can contain, or even amplify, the bias present in the data on which they are trained, owing to their objective functions and learning algorithms. Researchers have addressed this issue from different directions, for example by transforming the data to be statistically independent or by adversarial training that restricts a competitor seeking to maximize parity. These methods result in information loss, do not provide a suitable balance between accuracy and fairness, or do not guarantee that bias is limited during training. To this end, we propose a powerful training strategy for deep learning models, called the Distraction module, which can be theoretically proven effective in preventing bias from affecting the classification results. The method can be used with different data types (e.g., tabular, image, and graph data). We demonstrate its potency on the UCI Adult and Heritage Health datasets (tabular), the POKEC-Z, POKEC-N, and NBA datasets (graph), and the CelebA dataset (vision). Benchmarked against state-of-the-art methods proposed in the fairness literature for each dataset, our model is superior in minimizing bias while maintaining accuracy.
TabFairGAN: Fair Tabular Data Generation with Generative Adversarial Networks
Rajabi, Amirarsalan, Garibay, Ozlem Ozmen
With the increasing reliance on automated decision making, algorithmic fairness has grown in importance. In this paper, we propose a Generative Adversarial Network for tabular data generation. The model is trained in two phases. In the first phase, the model learns to generate synthetic data that closely resembles the reference dataset. In the second phase, we modify the value function to add a fairness constraint and continue training the network to generate data that is both accurate and fair. We evaluate the model in both the unconstrained and the constrained (fair) data generation settings. In the unconstrained case, i.e., when the model is trained only in the first phase and is meant solely to generate accurate data following the joint probability distribution of the real data, the results show that the model outperforms state-of-the-art GANs proposed in the literature for producing synthetic tabular data. In the constrained case, in which the first phase of training is followed by the second, we train and test the network on four datasets studied in the fairness literature, compare our results with a state-of-the-art pre-processing method, and report the promising results it achieves. Compared with other studies that use GANs for fair data generation, our model is more stable: it uses only a single critic and, by adopting a Wasserstein GAN formulation, avoids major problems of the original GAN model such as mode dropping and non-convergence.
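A minimal sketch of the two-phase scheme, assuming a WGAN with a single critic: phase one uses the plain Wasserstein objective, and phase two adds a fairness penalty (here, a demographic-parity gap computed on generated samples) to the generator's value function. The toy networks, column roles, and penalty weight are illustrative, not the paper's architecture or constraint.

```python
# Two-phase WGAN training with a fairness penalty added in phase two (toy example).
import torch
import torch.nn as nn

DIM = 8                                            # toy tabular feature width
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, DIM))
C = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, 1))   # single critic
opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
opt_c = torch.optim.RMSprop(C.parameters(), lr=5e-5)

real = torch.randn(256, DIM)                       # stand-in for the reference dataset
S_COL, Y_COL, LAMBDA_FAIR = 0, 1, 1.0              # protected-attribute / label columns, penalty weight

def fairness_penalty(fake):
    s = torch.sigmoid(fake[:, S_COL]) > 0.5        # discretize the protected attribute
    y = torch.sigmoid(fake[:, Y_COL])              # soft outcome
    if s.any() and (~s).any():
        return (y[s].mean() - y[~s].mean()).abs()  # demographic-parity gap
    return fake.sum() * 0.0

for step in range(500):
    phase2 = step >= 250                           # switch on the fairness constraint
    # Critic update (Wasserstein objective).
    fake = G(torch.randn(256, 16)).detach()
    opt_c.zero_grad(); (C(fake).mean() - C(real).mean()).backward(); opt_c.step()
    for p in C.parameters():
        p.data.clamp_(-0.01, 0.01)                 # weight clipping, as in the original WGAN
    # Generator update; phase two adds the fairness term to the value function.
    fake = G(torch.randn(256, 16))
    loss_g = -C(fake).mean() + (LAMBDA_FAIR * fairness_penalty(fake) if phase2 else 0.0)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```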