
Collaborating Authors: Garibay, Ivan


Predicting Through Generation: Why Generation Is Better for Prediction

arXiv.org Artificial Intelligence

This paper argues that generating output tokens is more effective than using pooled representations for prediction tasks because token-level generation retains more mutual information. Since LLMs are trained on massive text corpora using next-token prediction, generation aligns naturally with their learned behavior. Using the Data Processing Inequality (DPI), we provide both theoretical and empirical evidence supporting this claim. However, autoregressive models face two key challenges when used for prediction: (1) exposure bias, where the model sees ground-truth tokens during training but relies on its own predictions during inference, leading to compounding errors, and (2) format mismatch, where discrete tokens do not always align with the task's required output structure. To address these challenges, we introduce PredGen (Predicting Through Generating), an end-to-end framework that (i) uses scheduled sampling to reduce exposure bias, and (ii) introduces a task adapter to convert the generated tokens into structured outputs. Additionally, we introduce a Writer-Director Alignment Loss (WDAL), which ensures consistency between token generation and final task predictions, improving both text coherence and numerical accuracy. We evaluate PredGen on multiple classification and regression benchmarks. Our results show that PredGen consistently outperforms standard baselines, demonstrating its effectiveness in structured prediction tasks.
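
As a concrete illustration of the scheduled-sampling component, here is a minimal PyTorch sketch, assuming a `decoder` that maps a token prefix to per-position logits and a linear mixing schedule; the function name and schedule are illustrative assumptions, not the PredGen implementation.

```python
import torch

def scheduled_sampling_decode(decoder, targets, epoch, total_epochs):
    """One training-time decode that mixes gold tokens with the model's own
    predictions; the mixing probability grows linearly over training."""
    sample_prob = epoch / total_epochs          # 0 -> teacher forcing, 1 -> free-running
    batch_size, seq_len = targets.shape
    inputs = targets[:, :1]                     # begin with the first gold token
    step_logits = []
    for t in range(1, seq_len):
        logits = decoder(inputs)[:, -1, :]      # next-token logits from the prefix
        step_logits.append(logits)
        predicted = logits.argmax(dim=-1, keepdim=True)
        # With probability sample_prob, feed back the model's own prediction
        # instead of the ground-truth token, narrowing the train/test gap.
        use_model = torch.rand(batch_size, 1, device=targets.device) < sample_prob
        next_token = torch.where(use_model, predicted, targets[:, t:t + 1])
        inputs = torch.cat([inputs, next_token], dim=1)
    return torch.stack(step_logits, dim=1)      # (batch, seq_len - 1, vocab)
```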


User Profile with Large Language Models: Construction, Updating, and Benchmarking

arXiv.org Artificial Intelligence

User profile modeling plays a key role in personalized systems, as it involves building accurate profiles and updating them as new information arrives. In this paper, we present two high-quality open-source user profile datasets: one for profile construction and another for profile updating. These datasets offer a strong basis for evaluating user profile modeling techniques in dynamic settings. We also present a methodology that uses large language models (LLMs) to tackle both profile construction and updating. Our method uses a probabilistic framework to predict user profiles from input text, allowing for precise and context-aware profile generation. Our experiments demonstrate that models like Mistral-7b and Llama2-7b perform strongly on both tasks. LLMs improve the precision and recall of the generated profiles, and high evaluation scores confirm the effectiveness of our approach.
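
One common way to realize such probabilistic profile prediction is to score candidate profile values by the log-likelihood the LLM assigns to them. The sketch below illustrates this under our own assumptions (a small stand-in model, a hypothetical prompt template, and invented candidate values), not the paper's exact procedure.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in; the paper evaluates Mistral-7b and Llama2-7b
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def completion_logprob(prompt, completion):
    """Total log-probability the LM assigns to `completion` given `prompt`."""
    full = tok(prompt + completion, return_tensors="pt").input_ids
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    with torch.no_grad():
        logprobs = model(full).logits.log_softmax(dim=-1)
    # The token at position i is predicted by the logits at position i - 1.
    return sum(logprobs[0, i - 1, full[0, i]].item()
               for i in range(prompt_len, full.shape[1]))

post = "Spent the whole weekend rebuilding the carburetor on my old bike."
prompt = f"User post: {post}\nThe user's main interest is"
candidates = [" motorcycles", " gardening", " video games"]
profile_value = max(candidates, key=lambda c: completion_logprob(prompt, c))
```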


Fair Bilevel Neural Network (FairBiNN): On Balancing Fairness and Accuracy via Stackelberg Equilibrium

arXiv.org Artificial Intelligence

The persistent challenge of bias in machine learning models necessitates robust solutions to ensure parity and equal treatment across diverse groups, particularly in classification tasks. Current methods for mitigating bias often result in information loss and an inadequate balance between accuracy and fairness. To address this, we propose a novel methodology grounded in bilevel optimization principles. Our deep learning-based approach concurrently optimizes both accuracy and fairness objectives and, under certain assumptions, achieves provably Pareto-optimal solutions while mitigating bias in the trained model. Theoretical analysis indicates that the upper bound on the loss incurred by this method is less than or equal to the loss of the Lagrangian approach, which adds a regularization term to the loss function. We demonstrate the efficacy of our model primarily on tabular datasets such as UCI Adult and Heritage Health. When benchmarked against state-of-the-art fairness methods, our model exhibits superior performance, advancing fairness-aware machine learning solutions and bridging the accuracy-fairness gap. The implementation of FairBiNN is available at https://github.com/yazdanimehdi/FairBiNN.
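
To make the bilevel idea concrete, here is a simplified alternating leader-follower update in PyTorch. It assumes the network's parameters are partitioned between an accuracy optimizer and a fairness optimizer and uses a demographic-parity gap as the fairness objective; these choices are illustrative, not necessarily the exact FairBiNN procedure.

```python
import torch
import torch.nn.functional as F

def demographic_parity_gap(probs, groups):
    """|E[prediction | group 1] - E[prediction | group 0]| as a fairness loss."""
    return (probs[groups == 1].mean() - probs[groups == 0].mean()).abs()

def bilevel_step(model, x, y, groups, opt_accuracy, opt_fairness):
    """One alternating update; each optimizer covers only its own layer group."""
    # Follower: update the fairness parameters against the fairness objective.
    probs = torch.sigmoid(model(x)).squeeze(-1)
    fair_loss = demographic_parity_gap(probs, groups)
    opt_fairness.zero_grad()
    fair_loss.backward()
    opt_fairness.step()

    # Leader: update the accuracy parameters against the classification loss,
    # given the follower's response.
    probs = torch.sigmoid(model(x)).squeeze(-1)
    acc_loss = F.binary_cross_entropy(probs, y.float())
    opt_accuracy.zero_grad()
    acc_loss.backward()
    opt_accuracy.step()
```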


Agent-Based Modeling of C. Difficile Spread in Hospitals: Assessing Contribution of High-Touch vs. Low-Touch Surfaces and Inoculations' Containment Impact

arXiv.org Artificial Intelligence

Health issues and pandemics remain paramount concerns in the contemporary era. Clostridioides difficile infection (CDI) stands out as a critical healthcare-associated infection with global implications. Understanding the mechanisms of infection dissemination within healthcare units and hospitals is imperative for implementing targeted containment measures. In this study, we address the limitations of prior research by Sulyok et al., who delineated two distinct categories of surfaces, high-touch and low-touch fomites, and evaluated each surface type's contribution to pathogen spread using mathematical modeling with ordinary differential equations (ODEs). Acknowledging the indispensable role of spatial features and heterogeneity in modeling hospital and healthcare settings, we employ agent-based modeling to capture new insights. By incorporating spatial considerations and heterogeneous patients, we explore the impact of high-touch and low-touch surfaces on contamination transmission between patients. Furthermore, the study comprehensively assesses various cleaning protocols, with differing intervals and detergent efficacies, to identify the optimal cleaning strategy and the most influential factor among the alternatives. Our results indicate that, among the factors considered, the frequency of cleaning intervals is the most critical element for controlling the spread of CDI in a hospital environment.
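
A toy sketch of the simulation logic is given below: patients stochastically touch high- and low-touch surfaces, shedding or acquiring contamination, and surfaces are cleaned at a fixed interval with a given efficacy. All rates and update rules are illustrative assumptions, not the study's calibrated model.

```python
import random

def simulate(n_patients=20, steps=500, clean_interval=24,
             clean_efficacy=0.9, p_touch_high=0.5, p_touch_low=0.1,
             p_shed=0.3, p_acquire=0.2):
    """Return the fraction of patients contaminated at the end of the run."""
    infected = [random.random() < 0.1 for _ in range(n_patients)]
    high = [0.0] * n_patients   # per-room high-touch surface load
    low = [0.0] * n_patients    # per-room low-touch surface load
    for t in range(steps):
        for i in range(n_patients):
            for surface, p_touch in ((high, p_touch_high), (low, p_touch_low)):
                if random.random() < p_touch:
                    if infected[i] and random.random() < p_shed:
                        surface[i] += 1.0                 # shed onto surface
                    elif surface[i] > 0 and random.random() < p_acquire:
                        infected[i] = True                # acquire from surface
        if t % clean_interval == 0:                       # periodic cleaning
            high = [c * (1 - clean_efficacy) for c in high]
            low = [c * (1 - clean_efficacy) for c in low]
    return sum(infected) / n_patients

# Compare a frequent vs. infrequent cleaning schedule.
print(simulate(clean_interval=12), simulate(clean_interval=48))
```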


Controlling the Misinformation Diffusion in Social Media by the Effect of Different Classes of Agents

arXiv.org Artificial Intelligence

The rapid and widespread dissemination of misinformation through social networks is a growing concern in today's digital age. This study focuses on modeling fake news diffusion, discovering the spreading dynamics, and designing control strategies. A common approach to modeling misinformation dynamics is the SIR family of models. Our approach extends an SIR-based model called 'SBFC', which has three states: Susceptible, Believer, and Fact-Checker. Transitions between states depend on neighbors' beliefs, hoax credibility, the spreading rate, the probability of verifying the news, and the probability of forgetting the current state. Our contribution is to bring this model closer to real social networks by considering different classes of agents, each with its own characteristics. We propose two main strategies for confronting misinformation diffusion. First, we can educate a small class of agents, such as scholars or influencers, to improve their ability to verify the news or to remember their state longer. Second, we can add fact-checker bots to the network to spread the facts and influence their neighbors' states. Our results show that both approaches can effectively control the spread of misinformation.
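
The state dynamics can be sketched as a simple network simulation; the transition rules and parameter values below are a simplified rendering of the SBFC model family, not the exact model in the paper.

```python
import random
import networkx as nx

S, B, F = "S", "B", "F"  # Susceptible, Believer, Fact-Checker

def step(g, state, hoax_cred=0.3, p_verify=0.1, p_forget=0.05):
    """One synchronous update of all nodes' states."""
    new = dict(state)
    for node in g:
        nbrs = list(g.neighbors(node))
        believers = sum(state[n] == B for n in nbrs) / max(len(nbrs), 1)
        checkers = sum(state[n] == F for n in nbrs) / max(len(nbrs), 1)
        if state[node] == S:
            # Adoption pressure from believing neighbors, scaled by credibility.
            if random.random() < believers * hoax_cred:
                new[node] = B
            elif random.random() < checkers:
                new[node] = F
        elif state[node] == B and random.random() < p_verify:
            new[node] = F                       # believer verifies the news
        if new[node] != S and random.random() < p_forget:
            new[node] = S                       # forgetting returns to susceptible
    return new

g = nx.barabasi_albert_graph(500, 3)
state = {n: B if random.random() < 0.05 else S for n in g}
for _ in range(50):
    state = step(g, state)
```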


Ethical AI for Social Good

arXiv.org Artificial Intelligence

The concept of AI for Social Good (AI4SG) is gaining momentum in both information societies and the AI community. Advances in AI-based solutions make it possible to address societal issues effectively. To date, however, there is only a rudimentary grasp of what makes AI socially beneficial in principle, what constitutes AI4SG in practice, and what policies and regulations are needed to ensure it. This paper fills this gap by addressing the ethical aspects that are critical for future AI4SG efforts. Some of these characteristics are new to AI, while others take on greater importance in the context of its use.


Interpretable Multi-Head Self-Attention model for Sarcasm Detection in social media

arXiv.org Artificial Intelligence

Sarcasm is a linguistic expression often used to communicate the opposite of what is said, usually something very unpleasant, with an intention to insult or ridicule. The inherent ambiguity of sarcastic expressions makes sarcasm detection very difficult. In this work, we focus on detecting sarcasm in textual conversations from various social networking platforms and online media. To this end, we develop an interpretable deep learning model using multi-head self-attention and gated recurrent units. The multi-head self-attention module aids in identifying crucial sarcastic cue words in the input, and the recurrent units learn long-range dependencies between these cue words to better classify the input text. We show the effectiveness of our approach by achieving state-of-the-art results on multiple datasets from social networking platforms and online media. Models trained using our proposed approach are easily interpretable and enable identification of the sarcastic cues in the input text that contribute to the final classification score. We visualize the learned attention weights on a few sample input texts to showcase the effectiveness and interpretability of our model.
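
A skeleton of this architecture in PyTorch might look as follows; the dimensions and layer configuration are illustrative rather than the published hyperparameters.

```python
import torch
import torch.nn as nn

class SarcasmClassifier(nn.Module):
    """Multi-head self-attention over token embeddings, followed by a GRU."""
    def __init__(self, vocab_size, d_model=128, n_heads=4, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gru = nn.GRU(d_model, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, tokens):
        x = self.embed(tokens)                   # (batch, seq, d_model)
        # Self-attention highlights candidate sarcastic cue words; the
        # returned weights can be visualized for interpretability.
        attended, weights = self.attn(x, x, x)
        _, h = self.gru(attended)                # long-range cue dependencies
        return self.head(h[-1]).squeeze(-1), weights

model = SarcasmClassifier(vocab_size=30000)
logits, attn_weights = model(torch.randint(0, 30000, (2, 16)))
```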


DeepFork: Supervised Prediction of Information Diffusion in GitHub

arXiv.org Artificial Intelligence

Information spreads extremely fast on complex social networks; a piece of information can go viral in almost no time. It is often hard to contain this diffusion before significant disruption occurs, whether on a social media site or an online coding platform. GitHub is one such trending online focal point where businesses can reach potential contributors and customers simultaneously. Through this software development paradigm, millions of free software projects have recently emerged across diverse communities. To understand human influence, information spread, and the evolution of transmitted information among assorted users on GitHub, we developed DeepFork, a deep neural network model: a supervised machine learning approach that predicts information diffusion in complex social networks, considering node as well as topological features. In our empirical studies, we observed that information diffusion can be detected via link prediction using supervised learning. DeepFork outperforms other machine learning models because it better learns the discriminative patterns in the input features. DeepFork aids in understanding information spread and evolution through a bipartite network of users and repositories, i.e., information flows from user to repository to user.
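
Framed as supervised link prediction, the task reduces to classifying candidate (user, repository) edges from feature vectors. The toy sketch below uses an off-the-shelf scikit-learn MLP on invented features, standing in for DeepFork's actual architecture and feature set.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each row describes a candidate (user, repository) edge with node and
# topological features, e.g. [user activity, repository popularity,
# shared-neighbor count]; the data here are synthetic.
rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = (X[:, 2] + 0.2 * X[:, 0] > 0.6).astype(int)   # toy "will fork" label

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```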


Supervised Machine Learning based Ensemble Model for Accurate Prediction of Type 2 Diabetes

arXiv.org Machine Learning

According to the American Diabetes Association (ADA), 30.3 million people in the United States have diabetes, of whom 7.2 million may be undiagnosed and unaware of their condition. Type 2 diabetes is usually diagnosed later in life, whereas the less common Type 1 diabetes is typically diagnosed early in life. People can live healthy and happy lives with diabetes, but early detection produces better overall health outcomes for most patients. To test the accurate prediction of Type 2 diabetes, we use patient data from Practice Fusion, an electronic health records company, comprising about 10,000 patient records from 2009 to 2012. The data contain key individual biometrics, including age, diastolic and systolic blood pressure, gender, height, and weight. We apply popular machine learning algorithms to this data and, for each algorithm, evaluate every model's performance in terms of classification accuracy, precision, sensitivity (recall), specificity, negative predictive value, and F1 score. In our study, we find that all algorithms other than Naive Bayes suffer from very low precision. Hence, we take a step further and incorporate all the algorithms into a weighted-average, or soft-voting, ensemble model in which each algorithm contributes to the collective decision on whether a patient has diabetes. The accuracy of the ensemble model on Practice Fusion is 85%; to our knowledge, this ensemble approach is new in this space. The weighted-average ensemble model not only performs well on overall metrics but also helps recover wrong predictions, aiding in the accurate prediction of Type 2 diabetes. Our model can serve as an alert for patients to seek timely medical evaluation.
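
The described soft-voting ensemble can be sketched directly with scikit-learn's `VotingClassifier`; the member models, weights, and synthetic data below are illustrative assumptions rather than the study's exact configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for the biometric features (age, blood pressure, ...).
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("nb", GaussianNB())],
    voting="soft",                 # average predicted probabilities
    weights=[2, 2, 1],             # weighted average across members
)
ensemble.fit(X_tr, y_tr)
print("held-out accuracy:", ensemble.score(X_te, y_te))
```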


Forecasting the Success of Television Series using Machine Learning

arXiv.org Machine Learning

Television is an ever-evolving, multi-billion-dollar industry. The success of a television show in an increasingly technological society is a vast multi-variable formula. The art of success is not just something that happens; it is studied, replicated, and applied. Hollywood can be unpredictable regarding success, as many movies and sitcoms that are hyped up and promise to be hits end up being box office failures and complete disappointments. Current studies explore the linguistic relationship between television series and their target communities of viewers. A decision support system that produces sound, reliable results is needed to build confidence in the investment in a new TV series. The models presented in this study use data to determine what makes a sitcom successful. In this paper, we use descriptive and predictive modeling techniques to assess the continuing success of the television comedies The Office, Big Bang Theory, Arrested Development, Scrubs, and South Park. The factors tested for statistical significance on episode ratings are character presence, director, and writer. These statistics show that while characters are indeed crucial to the shows themselves, the creation and direction of the shows also bear on the ratings, and therefore on the success, of the shows. We use machine learning-based forecasting models to predict the success of shows. The models represent a baseline for understanding the success of a television show and how producers can increase the success of current shows or use this data in the creation of future shows. Given the many factors that go into a series, the empirical analysis in this work shows that there is no one-size-fits-all model for forecasting the rating or success of a television show.
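
As a minimal sketch of this setup, the regression below one-hot encodes director and writer and combines them with character-presence indicators to fit episode ratings; the column names and data are invented for illustration.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Toy episode table: director, writer, character presence, and the rating.
episodes = pd.DataFrame({
    "director": ["greg", "paul", "greg", "ken"],
    "writer": ["mindy", "bj", "mindy", "greg"],
    "char_lead": [1, 1, 1, 0],
    "char_sidekick": [1, 0, 1, 1],
    "rating": [8.4, 7.9, 8.6, 7.5],
})

# One-hot encode the categorical columns; keep the presence indicators as-is.
X = pd.get_dummies(episodes.drop(columns="rating"))
model = LinearRegression().fit(X, episodes["rating"])
print(dict(zip(X.columns, model.coef_.round(2))))
```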