latent construct
Foundation Priors
Foundation models, and in particular large language models, can generate highly informative responses, prompting growing interest in using these ''synthetic'' outputs as data in empirical research and decision-making. This paper introduces the idea of a foundation prior, under which model-generated outputs are treated not as real observations, but as draws from the prior predictive distribution induced by the foundation prior. Synthetic data therefore reflect both the model's learned patterns and the user's subjective priors, expectations, and biases. We model the subjectivity of the generative process by making explicit the dependence of synthetic outputs on the user's anticipated data distribution, the prompt-engineering process, and the trust placed in the foundation model. We derive the foundation prior as an exponentially tilted, generalized Bayesian update of the user's primitive prior, where a trust parameter governs the weight assigned to synthetic data. We then show how synthetic data and the associated foundation prior can be incorporated into standard statistical and econometric workflows, and discuss their use in applications such as refining complex models, informing latent constructs, guiding experimental design, and augmenting random-coefficient and partially linear specifications. By treating generative outputs as structured, explicitly subjective priors rather than as empirical observations, the framework offers a principled way to harness foundation models in empirical work while avoiding the conflation of synthetic ''facts'' with real data.
- North America > United States > Illinois > Cook County > Chicago (0.04)
- Oceania > Australia > Victoria > Melbourne (0.04)
- Asia > Middle East > Jordan (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.89)
- North America > United States > North Carolina > Orange County > Chapel Hill (0.04)
- North America > Canada (0.04)
- Europe > United Kingdom > England > Tyne and Wear > Sunderland (0.04)
- Research Report > Strength High (1.00)
- Research Report > Experimental Study (1.00)
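The exponentially tilted, generalized Bayesian update described in the abstract can be sketched numerically. In this illustrative Python example, the primitive prior, the synthetic draws `y_tilde`, and the Gaussian likelihood are all invented for illustration; the trust parameter `tau` downweights the synthetic evidence, with `tau = 0` recovering the primitive prior unchanged.

```python
import numpy as np

# Grid over the scalar parameter theta
theta = np.linspace(-3, 3, 601)
prior = np.exp(-0.5 * theta**2)          # primitive prior: N(0, 1), unnormalized
prior /= prior.sum()                     # normalize on the grid

# Hypothetical synthetic draws from the foundation model
y_tilde = np.array([1.2, 0.8, 1.0])
tau = 0.5                                # trust parameter: 0 = ignore synthetic data

# Log-likelihood of the synthetic draws under a N(theta, 1) model
loglik = -0.5 * ((y_tilde[:, None] - theta[None, :]) ** 2).sum(axis=0)

# Exponentially tilted (generalized Bayes) update of the primitive prior
foundation_prior = prior * np.exp(tau * loglik)
foundation_prior /= foundation_prior.sum()

# The foundation prior's mean is pulled from 0 toward the synthetic draws,
# by an amount controlled by tau
mean_shift = (theta * foundation_prior).sum()
```

With these numbers the foundation prior concentrates between the primitive prior mean (0) and the synthetic-data mean (1.0), landing near 0.6 for `tau = 0.5`.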
The Effects of Flipped Classrooms in Higher Education: A Causal Machine Learning Analysis
Czarnowske, Daniel, Heiss, Florian, Schmitz, Theresa M. A., Stammann, Amrei
This study uses double/debiased machine learning (DML) to evaluate the impact of transitioning from lecture-based blended teaching to a flipped classroom concept. Our findings indicate effects on students' self-conception, procrastination, and enjoyment. We do not find significant positive effects on exam scores, passing rates, or knowledge retention. We attribute this to insufficient use of the instructional approach, which we identify through uniquely detailed usage data; this highlights the need for additional teaching strategies. Methodologically, we propose a powerful DML approach that acknowledges the latent structure inherent in Likert-scale variables and hence aligns with psychometric principles.
- Europe > Austria > Vienna (0.14)
- Europe > Germany > North Rhine-Westphalia > Düsseldorf Region > Düsseldorf (0.04)
- Europe > Germany > Bavaria > Upper Franconia > Bayreuth (0.04)
- (7 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Instructional Material > Course Syllabus & Notes (1.00)
- Education > Educational Setting > Online (1.00)
- Education > Educational Setting > Higher Education (1.00)
- Education > Educational Technology (0.93)
- (2 more...)
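The double/debiased machine learning estimator mentioned in the abstract can be illustrated with a deliberately simplified sketch: a partially linear model with linear nuisance learners and 2-fold cross-fitting. The paper itself uses flexible ML learners and models the latent Likert structure, neither of which is reproduced here; all data below are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta = 2000, 0.5                       # theta: true treatment effect
X = rng.normal(size=(n, 3))                # observed controls
D = X @ np.array([1.0, -0.5, 0.25]) + rng.normal(size=n)             # treatment
Y = theta * D + X @ np.array([0.5, 1.0, -1.0]) + rng.normal(size=n)  # outcome

def ols_predict(X_tr, y_tr, X_te):
    """Fit OLS on the training fold, predict on the held-out fold."""
    beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    return X_te @ beta

# 2-fold cross-fitting: nuisances estimated on one fold, residuals on the other
folds = np.array_split(rng.permutation(n), 2)
res_y, res_d = np.empty(n), np.empty(n)
for k in range(2):
    te, tr = folds[k], folds[1 - k]
    res_y[te] = Y[te] - ols_predict(X[tr], Y[tr], X[te])
    res_d[te] = D[te] - ols_predict(X[tr], D[tr], X[te])

# Final stage: residual-on-residual regression recovers theta
theta_hat = (res_d @ res_y) / (res_d @ res_d)
```

In an actual DML application the `ols_predict` step would be replaced by cross-validated machine learners (random forests, boosting, neural networks), with cross-fitting guarding against overfitting bias.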
Structural Equation-VAE: Disentangled Latent Representations for Tabular Data
Zhang, Ruiyu, Zhao, Ce, Zhao, Xin, Nie, Lin, Lam, Wai-Fung
Learning interpretable latent representations from tabular data remains a challenge in deep generative modeling. We introduce SE-VAE (Structural Equation-Variational Autoencoder), a novel architecture that embeds measurement structure directly into the design of a variational autoencoder. Inspired by structural equation modeling, SE-VAE aligns latent subspaces with known indicator groupings and introduces a global nuisance latent to isolate construct-specific confounding variation. This modular architecture enables disentanglement through design rather than through statistical regularizers alone. We evaluate SE-VAE on a suite of simulated tabular datasets and benchmark its performance against a series of leading baselines using standard disentanglement metrics. SE-VAE consistently outperforms alternatives in factor recovery, interpretability, and robustness to nuisance variation. Ablation results reveal that architectural structure, rather than regularization strength, is the key driver of performance. SE-VAE offers a principled framework for white-box generative modeling in scientific and social domains where latent constructs are theory-driven and measurement validity is essential.
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > North Carolina > Orange County > Chapel Hill (0.04)
- North America > Canada > British Columbia > Metro Vancouver Regional District > Vancouver (0.04)
- Europe > United Kingdom > England > Tyne and Wear > Sunderland (0.04)
- Research Report > Strength High (1.00)
- Research Report > Experimental Study (1.00)
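The core architectural idea of SE-VAE, aligning latent subspaces with known indicator groupings plus a global nuisance latent, can be sketched as a block-structured decoder mask. The grouping below is hypothetical, and this shows only the masking logic, not a full variational autoencoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical measurement design: 3 constructs, each with 3 indicators
groups = {0: [0, 1, 2], 1: [3, 4, 5], 2: [6, 7, 8]}
n_indicators = 9
n_latents = len(groups) + 1               # +1 for the global nuisance latent

# Decoder mask: indicator i loads only on its construct's latent
# and on the shared nuisance dimension (disentanglement by design)
mask = np.zeros((n_indicators, n_latents))
for z, indicators in groups.items():
    mask[indicators, z] = 1.0
mask[:, -1] = 1.0                          # nuisance latent reaches every indicator

# Free decoder weights, constrained by the structural mask
W = rng.normal(size=(n_indicators, n_latents))
W_masked = W * mask                        # off-block loadings are exactly zero
```

This mirrors a confirmatory factor-analysis loading pattern: structure is imposed by the architecture itself rather than recovered through statistical regularizers, which is the design choice the abstract's ablations credit for the performance gains.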
Assessment and manipulation of latent constructs in pre-trained language models using psychometric scales
Reuben, Maor, Slobodin, Ortal, Elyshar, Aviad, Cohen, Idan-Chaim, Braun-Lewensohn, Orna, Cohen, Odeya, Puzis, Rami
Human-like personality traits have recently been discovered in large language models, raising the hypothesis that their (known and as yet undiscovered) biases conform with human latent psychological constructs. While large conversational models may be tricked into answering psychometric questionnaires, the latent psychological constructs of thousands of simpler transformers, trained for other tasks, cannot be assessed because appropriate psychometric methods are currently lacking. Here, we show how standard psychological questionnaires can be reformulated into natural language inference prompts, and we provide a code library to support the psychometric assessment of arbitrary models. We demonstrate, using a sample of 88 publicly available models, the existence of human-like mental health-related constructs (including anxiety, depression, and Sense of Coherence) which conform with standard theories in human psychology and show similar correlations and mitigation strategies. The ability to interpret and rectify the performance of language models by using psychological tools can boost the development of more explainable, controllable, and trustworthy models.
- North America > United States > Iowa (0.04)
- Asia > Vietnam > Da Nang > Da Nang (0.04)
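The reformulation of questionnaire items into natural language inference prompts can be sketched as follows. The item texts and the premise template are illustrative placeholders, not the paper's actual instruments or its code library.

```python
# Reformulate psychometric questionnaire items as NLI premise/hypothesis pairs.
items = [
    "I often feel nervous without a clear reason.",
    "My daily activities are a source of deep satisfaction.",
]

def to_nli(item, premise="The speaker is describing their typical state of mind."):
    # An NLI model's entailment score for (premise, hypothesis) then plays
    # the role of the Likert response to the questionnaire item.
    return {"premise": premise, "hypothesis": item}

pairs = [to_nli(item) for item in items]
```

Because the pairs are ordinary NLI inputs, any transformer with an NLI head can be scored on them, which is what makes the assessment applicable to simple task-specific models and not only to conversational LLMs.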
ChatGPT in Classrooms: Transforming Challenges into Opportunities in Education
Munawar, Harris Bin, Misirlis, Nikolaos
In the era of exponential technology growth, one unexpected guest has claimed a seat in classrooms worldwide: Artificial Intelligence. Generative AI, such as ChatGPT, promises a revolution in education, yet it arrives as a double-edged sword. Its potential for personalized learning is offset by issues of cheating, inaccuracies, and educators struggling to incorporate it effectively into their lesson design. We are standing on the brink of this educational frontier, and it is clear that we need to navigate this terrain with great care. This is a major challenge that could undermine the integrity and value of our educational process. So, how can we turn these challenges into opportunities? When used inappropriately, AI tools can become the perfect vehicle for a cut-copy-paste mentality and quickly begin to corrode critical thinking, creativity, and deep understanding, the most important skills in our rapidly changing world. Teachers feel that they are not equipped to leverage this technology, widening the digital divide among educators and institutions. Addressing these concerns calls for an in-depth research approach. We employ empirical research, drawing on the Technology Acceptance Model, to assess attitudes toward generative AI among educators and students. Understanding their perceptions, usage patterns, and hurdles is the first crucial step in creating an effective solution. The present study can serve as a process manual for future researchers to apply with their own data, following the steps explained here.
- Europe > Netherlands (0.04)
- Europe > Greece > Central Macedonia > Thessaloniki (0.04)
- Asia > Middle East > Saudi Arabia (0.04)
- Africa > South Africa (0.04)
- Education > Educational Setting (0.70)
- Education > Educational Technology > Educational Software (0.49)
Evaluating General-Purpose AI with Psychometrics
Wang, Xiting, Jiang, Liming, Hernandez-Orallo, Jose, Stillwell, David, Sun, Luning, Luo, Fang, Xie, Xing
Comprehensive and accurate evaluation of general-purpose AI systems such as large language models allows for effective mitigation of their risks and deepened understanding of their capabilities. Current evaluation methodology, mostly based on benchmarks of specific tasks, falls short of adequately assessing these versatile AI systems, as present techniques lack a scientific foundation for predicting their performance on unforeseen tasks and explaining their varying performance on specific task items or user inputs. Moreover, existing benchmarks of specific tasks raise growing concerns about their reliability and validity. To tackle these challenges, we suggest transitioning from task-oriented evaluation to construct-oriented evaluation. Psychometrics, the science of psychological measurement, provides a rigorous methodology for identifying and measuring the latent constructs that underlie performance across multiple tasks. We discuss its merits, warn against potential pitfalls, and propose a framework to put it into practice. Finally, we explore future opportunities of integrating psychometrics with the evaluation of general-purpose AI systems.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- Asia > China > Beijing > Beijing (0.04)
- Research Report (1.00)
- Overview (0.88)
- Education (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.68)
- Health & Medicine > Therapeutic Area > Neurology (0.46)
- Government > Regional Government > North America Government > United States Government (0.46)
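One standard psychometric tool that a construct-oriented evaluation might borrow is an internal-consistency estimate such as Cronbach's alpha over item-level scores. The data below are simulated, with five items loading on a single latent construct; nothing here reproduces the paper's proposed framework.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated item-score matrix: rows = AI systems under evaluation,
# columns = 5 task items assumed to measure one latent construct
ability = rng.normal(size=(200, 1))
scores = ability + 0.5 * rng.normal(size=(200, 5))

# Cronbach's alpha: internal consistency of the 5-item "scale"
k = scores.shape[1]
sum_item_var = scores.var(axis=0, ddof=1).sum()
total_var = scores.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - sum_item_var / total_var)
```

A high alpha indicates the items behave as measurements of a common underlying capability, which is the kind of evidence a construct-oriented benchmark would need before aggregating task scores into a single capability claim.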