A Meta-Analysis of Overfitting in Machine Learning

Neural Information Processing Systems

We conduct the first large meta-analysis of overfitting due to test set reuse in the machine learning community. Our analysis is based on over one hundred machine learning competitions hosted on the Kaggle platform over the course of several years. In each competition, numerous practitioners repeatedly evaluated their progress against a holdout set that forms the basis of a public ranking available throughout the competition. Performance on a separate test set used only once determined the final ranking. By systematically comparing the public ranking with the final ranking, we assess how much participants adapted to the holdout set over the course of a competition.
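The comparison the abstract describes can be sketched as a rank-correlation check between the public (holdout) leaderboard and the final (test-set) leaderboard. The scores below are invented for illustration; the paper analyzes real Kaggle leaderboards.

```python
# Sketch: does the public (holdout) ranking survive on the unseen test set?
# Ranks assume no ties for simplicity.
def ranks(scores):
    # Rank 1 = best (highest score).
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    r = [0] * len(scores)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(public_scores, private_scores):
    # Spearman rank correlation: 1 - 6 * sum(d^2) / (n * (n^2 - 1)).
    pub, priv = ranks(public_scores), ranks(private_scores)
    n = len(pub)
    d2 = sum((a - b) ** 2 for a, b in zip(pub, priv))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

public = [0.91, 0.89, 0.88, 0.86, 0.85]   # invented holdout accuracies
private = [0.88, 0.89, 0.84, 0.86, 0.83]  # invented final test accuracies
rho = spearman_rho(public, private)  # rho near 1 => little adaptive overfitting
```

A rho close to 1 means the public ranking largely survives on the unseen test set, i.e. participants adapted little to the holdout.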


A Meta-Analysis of LLM Effects on Students across Qualification, Socialisation, and Subjectification

Huang, Jiayu, Wang, Ruoxin Ritter, Liu, Jen-Hao, Xia, Boming, Huang, Yue, Sun, Ruoxi, Xue, Jason Minhui, Zou, Jinan

arXiv.org Artificial Intelligence

Large language models (LLMs) are increasingly positioned as solutions for education, yet evaluations often reduce their impact to narrow performance metrics. This paper reframes the question by asking "what kind of impact should LLMs have in education?" Drawing on Biesta's tripartite account of good education: qualification, socialisation, and subjectification, we present a meta-analysis of 133 experimental and quasi-experimental studies (k = 188). Overall, the impact of LLMs on student learning is positive but uneven. Strong effects emerge in qualification, particularly when LLMs function as tutors in sustained interventions. Socialisation outcomes appear more variable, concentrated in sustained, reflective interventions. Subjectification, linked to autonomy and learner development, remains fragile, with improvements confined to small-scale, long-term studies. This purpose-level view highlights design as the decisive factor: without scaffolds for participation and agency, LLMs privilege what is easiest to measure while neglecting broader aims of education. For HCI and education, the issue is not just whether LLMs work, but what futures they enable or foreclose.


A meta-analysis on the performance of machine-learning based language models for sentiment analysis

Rohde, Elena, Klingwort, Jonas, Borgs, Christian

arXiv.org Artificial Intelligence

Social media is a valuable data source for social science research, particularly in analyzing public sentiment during events with considerable social impact (Wang et al. 2021). However, the large volume of text data makes evaluation challenging. Sentiment analysis, using Natural Language Processing, extracts attitudes and emotions from text to classify content into categories like positive, negative, or neutral (Govindarajan 2022). Sentiment analysis methods fall into lexicon-based and machine-learning approaches, with the latter preferred for social media due to higher accuracy (Hartmann et al. 2019; Verma and Jain 2022). Machine learning strategies vary by algorithm and feature extraction, making overall performance evaluation challenging. This raises questions about algorithm effectiveness and the factors influencing variability. Identifying study characteristics and potential variability sources is crucial for setting realistic performance expectations (Hartmann et al. 2023). This paper contributes to the literature by conducting a systematic literature review, followed by a meta-analysis and meta-regression, to explain the variation in the performance outcomes of machine learning algorithms in the context of social media data sentiment analysis. The results provide evidence of the factors contributing to the varying performance of different machine-learning algorithms in sentiment analysis.
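As a concrete contrast to the ML approaches the paper evaluates, the simpler lexicon-based family mentioned above can be sketched in a few lines; the word lists here are illustrative toys, not a real sentiment lexicon.

```python
# Minimal sketch of a lexicon-based sentiment classifier, the baseline family
# the abstract contrasts with machine-learning approaches.
POSITIVE = {"good", "great", "love", "excellent"}
NEGATIVE = {"bad", "awful", "hate", "terrible"}

def classify(text):
    words = text.lower().split()
    # Net count of positive vs negative lexicon hits decides the label.
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

Its weakness is visible immediately: any wording outside the lexicon falls through to "neutral", which is one reason the literature prefers learned models for noisy social media text.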


Transforming Evidence Synthesis: A Systematic Review of the Evolution of Automated Meta-Analysis in the Age of AI

Li, Lingbo, Mathrani, Anuradha, Susnjak, Teo

arXiv.org Artificial Intelligence

Exponential growth in scientific literature has heightened the demand for efficient evidence-based synthesis, driving the rise of the field of Automated Meta-analysis (AMA) powered by natural language processing and machine learning. This PRISMA systematic review introduces a structured framework for assessing the current state of AMA, based on screening 978 papers from 2006 to 2024, and analyzing 54 studies across diverse domains. Findings reveal a predominant focus on automating data processing (57%), such as extraction and statistical modeling, while only 17% address advanced synthesis stages. Just one study (2%) explored preliminary full-process automation, highlighting a critical gap that limits AMA's capacity for comprehensive synthesis. Despite recent breakthroughs in large language models (LLMs) and advanced AI, their integration into statistical modeling and higher-order synthesis, such as heterogeneity assessment and bias evaluation, remains underdeveloped. This has constrained AMA's potential for fully autonomous meta-analysis. From our dataset spanning medical (67%) and non-medical (33%) applications, we found that AMA has exhibited distinct implementation patterns and varying degrees of effectiveness in actually improving efficiency, scalability, and reproducibility. While automation has enhanced specific meta-analytic tasks, achieving seamless, end-to-end automation remains an open challenge. As AI systems advance in reasoning and contextual understanding, addressing these gaps is now imperative. Future efforts must focus on bridging automation across all meta-analysis stages, refining interpretability, and ensuring methodological robustness to fully realize AMA's potential for scalable, domain-agnostic synthesis.
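One of the higher-order synthesis steps the review finds under-automated, heterogeneity assessment, can be sketched with Cochran's Q and the I² statistic. The effect sizes and variances below are invented for illustration.

```python
def heterogeneity(effects, variances):
    # Cochran's Q: inverse-variance-weighted squared deviations from the
    # pooled estimate. I^2 expresses the share of total variation
    # attributable to between-study heterogeneity rather than chance.
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i_squared = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return pooled, q, i_squared

# Invented per-study effect sizes and sampling variances:
pooled, q, i2 = heterogeneity([0.30, 0.45, 0.28, 0.52],
                              [0.01, 0.02, 0.015, 0.01])
```

Automating this step reliably (choosing between fixed- and random-effects models based on such diagnostics) is exactly the kind of statistical-modeling gap the review highlights.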


Truth in Text: A Meta-Analysis of ML-Based Cyber Information Influence Detection Approaches

Pittman, Jason M.

arXiv.org Artificial Intelligence

Cyber information influence, or disinformation in general terms, is widely regarded as one of the biggest threats to social progress and government stability. From US presidential elections to European Union referendums and down to regional news reporting of wildfires, lies and post-truths have normalized radical decision-making. Accordingly, there has been an explosion in research seeking to detect disinformation in online media. The frontier of disinformation detection research leverages a variety of ML techniques, from traditional ML algorithms such as Support Vector Machines, Random Forest, and Naïve Bayes to deep learning models including Convolutional Neural Networks, Long Short-Term Memory networks, and transformer-based architectures. Despite the overall success of such techniques, the literature demonstrates inconsistencies when viewed holistically, which limits our understanding of their true effectiveness. Accordingly, this work employed a two-stage meta-analysis to (a) derive an overall meta statistic for ML model effectiveness in detecting disinformation and (b) investigate the same by subgroups of ML model types. The study found that the majority of the 81 ML detection techniques sampled achieve greater than 80% accuracy, with a mean sample effectiveness of 79.18% accuracy. Meanwhile, subgroups demonstrated no statistically significant difference between approaches but revealed high within-group variance. Based on the results, this work recommends future work on replication and on the development of detection methods operating at the ML model level.
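The overall meta statistic of step (a) can be sketched as inverse-variance (fixed-effect) pooling of per-study accuracies. The studies below are invented placeholders; the paper pools 81 real detection techniques.

```python
# Hedged sketch of the pooling step in a meta-analysis of detection accuracy.
# Each tuple is (accuracy, sample size); values are invented for illustration.
studies = [(0.82, 500), (0.91, 1200), (0.76, 300), (0.85, 800)]

def pooled_accuracy(studies):
    # For a proportion p observed over n trials, var = p(1-p)/n.
    # Fixed-effect pooling weights each study by 1/var, so larger and
    # more precise studies dominate the combined estimate.
    num = den = 0.0
    for p, n in studies:
        var = p * (1 - p) / n
        w = 1.0 / var
        num += w * p
        den += w
    return num / den

overall = pooled_accuracy(studies)
```

Subgroup comparison (step b) would repeat this pooling within each model family and then test whether the subgroup estimates differ more than sampling error allows.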


Reviews: A Meta-Analysis of Overfitting in Machine Learning

Neural Information Processing Systems

Update: I thank the authors for the feedback and the additional figure. I would gladly vote for acceptance of the paper. They surveyed 112 Kaggle competitions and concluded that there is little evidence of substantial overfitting. I found the question very important to the machine learning community. The investigation is thorough in that it covers a broad spectrum of datasets.


Reviews: A Meta-Analysis of Overfitting in Machine Learning

Neural Information Processing Systems

In this paper the authors performed a large-scale empirical study of overfitting due to test set reuse in the machine learning community. The survey includes models from 112 Kaggle competitions and concludes that there is little evidence of substantial overfitting. The reviewers thought that the topic is important, that the paper was well written and easy to follow, and that the experiments were well executed. Since concerns about the submission's similarity to another submission appeared late in the review process (reviewers were never given a chance to comment on it), the PCs ultimately reviewed the situation. They determined that the submission was sufficiently different from submission 5286 and could be accepted.


Empowering Meta-Analysis: Leveraging Large Language Models for Scientific Synthesis

Ahad, Jawad Ibn, Sultan, Rafeed Mohammad, Kaikobad, Abraham, Rahman, Fuad, Amin, Mohammad Ruhul, Mohammed, Nabeel, Rahman, Shafin

arXiv.org Artificial Intelligence

This study investigates the automation of meta-analysis in scientific documents using large language models (LLMs). Meta-analysis is a robust statistical method that synthesizes the findings of multiple studies to provide a comprehensive understanding; a meta-analysis article presents a structured synthesis of several primary studies. However, conducting meta-analysis by hand is labor-intensive, time-consuming, and susceptible to human error, highlighting the need for automated pipelines to streamline the process. Our research introduces a novel approach that fine-tunes LLMs on extensive scientific datasets to address challenges in big data handling and structured data extraction. We automate and optimize the meta-analysis process by integrating Retrieval Augmented Generation (RAG). Tailored through prompt engineering and a new loss metric, Inverse Cosine Distance (ICD), designed for fine-tuning on large contextual datasets, LLMs efficiently generate structured meta-analysis content. Human evaluation then assesses relevance and reports model performance on key metrics. This research demonstrates that fine-tuned models outperform non-fine-tuned models, with fine-tuned LLMs generating 87.6% relevant meta-analysis abstracts. Human evaluation of contextual relevance shows a reduction in irrelevancy from 4.56% to 1.9%. These experiments were conducted in a low-resource environment, highlighting the study's contribution to enhancing the efficiency and reliability of meta-analysis automation.
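The abstract names the Inverse Cosine Distance (ICD) loss but does not give its formula, so the sketch below is a loudly hypothetical reading: a loss that inverts the (shifted) cosine similarity between prediction and reference embeddings, penalizing misaligned pairs sharply. The paper's actual definition may differ.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def icd_loss(pred, target, eps=1e-8):
    # HYPOTHETICAL reading of "Inverse Cosine Distance": the reciprocal of
    # (1 + cosine similarity), so aligned embeddings give a small loss and
    # anti-aligned embeddings a very large one.
    return 1.0 / (1.0 + cosine_similarity(pred, target) + eps)
```

Under this reading the loss is minimized when generated and reference representations point in the same direction, which matches the paper's stated goal of fitting large contextual datasets.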


(Unfair) Norms in Fairness Research: A Meta-Analysis

Chien, Jennifer, Bergman, A. Stevie, McKee, Kevin R., Tomasev, Nenad, Prabhakaran, Vinodkumar, Qadri, Rida, Marchal, Nahema, Isaac, William

arXiv.org Artificial Intelligence

Algorithmic fairness has emerged as a critical concern in artificial intelligence (AI) research. However, the development of fair AI systems is not an objective process. Fairness is an inherently subjective concept, shaped by the values, experiences, and identities of those involved in research and development. To better understand the norms and values embedded in current fairness research, we conduct a meta-analysis of algorithmic fairness papers from two leading conferences on AI fairness and ethics, AIES and FAccT, covering a final sample of 139 papers over the period from 2018 to 2022. Our investigation reveals two concerning trends: first, a US-centric perspective dominates throughout fairness research; and second, fairness studies exhibit a widespread reliance on binary codifications of human identity (e.g., "Black/White", "male/female"). These findings highlight how current research often overlooks the complexities of identity and lived experiences, ultimately failing to represent diverse global contexts when defining algorithmic bias and fairness. We discuss the limitations of these research design choices and offer recommendations for fostering more inclusive and representative approaches to fairness in AI systems, urging a paradigm shift that embraces nuanced, global understandings of human identity and values.