self-regulation


Unveiling Gamer Archetypes through Multimodal Feature Correlations and Unsupervised Learning

Kanwal, Moona, Siddiqui, Muhammad Sami, Ali, Syed Anael

arXiv.org Artificial Intelligence

Profiling gamers provides critical insights for adaptive game design, behavioral understanding, and digital well-being. This study proposes an integrated, data-driven framework that combines psychological measures, behavioral analytics, and machine learning to reveal underlying gamer personas. A structured survey of 250 participants, including 113 active gamers, captured multidimensional behavioral, motivational, and social data. The analysis pipeline integrated feature engineering, association networks, knowledge-graph analysis, and unsupervised clustering to extract meaningful patterns. Correlation statistics (Cramér's V, Tschuprow's T, Theil's U, and Spearman's rank correlation) quantified feature associations, and network centrality guided feature selection. Dimensionality-reduction techniques such as PCA, SVD, and t-SNE were coupled with clustering algorithms (K-Means, Agglomerative, Spectral, and DBSCAN), evaluated using the Silhouette, Calinski-Harabasz, and Davies-Bouldin indices. The PCA + K-Means model with k = 4 achieved the best cluster quality (Silhouette = 0.4), identifying four archetypes: Immersive Social Story-Seekers, Disciplined Optimizers, Strategic Systems Navigators, and Competitive Team-Builders. This research contributes a reproducible pipeline that links correlation-driven network insights with unsupervised learning. The integration of behavioral correlation networks with clustering not only enhances classification accuracy but also offers a holistic lens to connect gameplay motivations with psychological and wellness outcomes.
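The best-performing configuration from the abstract, PCA followed by K-Means with k = 4 scored with the Silhouette index, can be sketched with scikit-learn. The survey data is not public, so a synthetic feature matrix stands in for the engineered gamer features; the sample size, component count, and random seeds here are illustrative assumptions, not the authors' settings.

```python
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Synthetic stand-in: 113 "gamers" x 12 engineered features with 4 latent groups
X, _ = make_blobs(n_samples=113, n_features=12, centers=4, random_state=0)
X = StandardScaler().fit_transform(X)

# Reduce to a few principal components before clustering
X_pca = PCA(n_components=3, random_state=0).fit_transform(X)

# K-Means with k = 4, the value selected in the study
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_pca)

print("clusters:", len(set(labels)))
print("silhouette: %.2f" % silhouette_score(X_pca, labels))
```

On real survey features the silhouette would be far lower than on these well-separated synthetic blobs; the paper reports about 0.4.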


The Personality Illusion: Revealing Dissociation Between Self-Reports & Behavior in LLMs

Han, Pengrui, Kocielnik, Rafal, Song, Peiyang, Debnath, Ramit, Mobbs, Dean, Anandkumar, Anima, Alvarez, R. Michael

arXiv.org Machine Learning

Personality traits have long been studied as predictors of human behavior. Recent advances in Large Language Models (LLMs) suggest similar patterns may emerge in artificial systems, with advanced LLMs displaying consistent behavioral tendencies resembling human traits like agreeableness and self-regulation. Understanding these patterns is crucial, yet prior work primarily relied on simplified self-reports and heuristic prompting, with little behavioral validation. In this study, we systematically characterize LLM personality across three dimensions: (1) the dynamic emergence and evolution of trait profiles throughout training stages; (2) the predictive validity of self-reported traits in behavioral tasks; and (3) the impact of targeted interventions, such as persona injection, on both self-reports and behavior. Our findings reveal that instructional alignment (e.g., RLHF, instruction tuning) significantly stabilizes trait expression and strengthens trait correlations in ways that mirror human data. However, these self-reported traits do not reliably predict behavior, and observed associations often diverge from human patterns. While persona injection successfully steers self-reports in the intended direction, it exerts little or inconsistent effect on actual behavior. By distinguishing surface-level trait expression from behavioral consistency, our findings challenge assumptions about LLM personality and underscore the need for deeper evaluation in alignment and interpretability.
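The predictive-validity question at the heart of this abstract reduces to measuring the association between self-reported trait scores and a behavioral measure. A minimal sketch, using synthetic data (the variable names and the 0.1 effect size are hypothetical, chosen to mimic a weak self-report/behavior link like the one the paper reports):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical scores: self-reported agreeableness per model checkpoint,
# and a behavioral measure (e.g., a cooperation rate in a task) that is
# only weakly related to the self-report.
self_report = rng.normal(size=n)
behavior = 0.1 * self_report + rng.normal(size=n)

def spearman(x, y):
    # Spearman's rho = Pearson correlation of rank-transformed data
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

rho = spearman(self_report, behavior)
print("Spearman rho: %.2f" % rho)  # weak association -> low predictive validity
```

A rho near zero, as here, is the dissociation pattern the title refers to: the self-report moves, the behavior does not follow.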


Designing for Self-Regulation in Informal Programming Learning: Insights from a Storytelling-Centric Approach

Alghamdi, Sami Saeed, Bull, Christopher, Kharrufa, Ahmed

arXiv.org Artificial Intelligence

Many people learn programming independently from online resources and often report struggles in achieving their personal learning goals. Learners frequently describe their experiences as isolating and frustrating, challenged by abundant uncertainties, information overload, and distraction, compounded by limited guidance. At the same time, social media serves as a personal space where many engage in diverse self-regulation practices, including help-seeking, using external memory aids (e.g., self-notes), self-reflection, emotion regulation, and self-motivation. For instance, learners often mark achievements and set milestones through their posts. In response, we developed a system consisting of a web platform and browser extensions to support self-regulation online. The design aims to add learner-defined structure to otherwise unstructured experiences and bring meaning to curation and reflection activities by translating them into learning stories with AI-generated feedback. We position storytelling as an integrative approach to design that connects resource curation, reflective and sensemaking practice, and narrative practices learners already use across social platforms. We recruited 15 informal programming learners who are regular social media users to engage with the system in a self-paced manner; participation concluded upon submitting a learning story and survey. We used three quantitative scales and a qualitative survey to examine users' characteristics and perceptions of the system's support for their self-regulation. User feedback suggests the system's viability as a self-regulation aid. Learners particularly valued in-situ reflection, automated story feedback, and video annotation, while other features received mixed views. We highlight perceived benefits, friction points, and design opportunities for future AI-augmented self-regulation tools.
Many people interested in programming take a self-directed approach to learning, drawing on a wide range of informal online resources (e.g., [1]-[4]). According to a 2024 Stack Overflow survey, programming learners engage more frequently with open-ended, nonlinear materials such as forums, tutorials, technical documentation, and social media platforms (e.g., YouTube, Twitch, and X) than with textbooks or structured e-learning courses (i.e., MOOCs) [5].


Exploring AI Writers: Technology, Impact, and Future Prospects

Huang, Zhiqian

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) writers have emerged as a significant force in the realm of content creation. These advanced tools leverage natural language processing techniques to generate coherent and logical texts, applicable across various domains such as journalism, advertising, and educational materials. This document delves into the capabilities, applications, and implications of AI writers, examining their technological underpinnings, market influence, strengths, limitations, future trajectories, and ethical considerations. In the rapidly evolving landscape of artificial intelligence technologies today, AI models are increasingly being applied across various domains, with literary creation being no exception.


Video game firms found to have broken own UK industry rules on loot boxes

The Guardian

The UK government's decision to let technology companies self-regulate gambling-style loot boxes in video games has been called into question, after some of the developers put in charge of new industry guidelines broke their own rules. In the past six months, the advertising regulator has upheld complaints against three companies involved in drawing up industry rules, including the leading developer Electronic Arts (EA), for failing to disclose that their games contained loot boxes. An expert who submitted the complaints said he had found hundreds more examples of breaches but had only taken a handful to the Advertising Standards Authority (ASA) in order to highlight the problem. Loot boxes are in-game features that allow players to pay, with real money or virtual currency, to open a digital envelope containing random prizes, such as an outfit or a weapon for a character. Despite warnings from experts that loot boxes carry similar risks to gambling, the then Department for Digital, Culture, Media and Sport said in July 2022 it would not follow other countries, such as Belgium, in choosing to regulate them as gambling products.


Process mining for self-regulated learning assessment in e-learning

Cerezo, R., Bogarin, A., Esteban, M., Romero, C.

arXiv.org Artificial Intelligence

Content assessment has broadly improved in e-learning scenarios in recent decades. However, the e-learning process can give rise to a spatial and temporal gap that poses interesting challenges for assessment of not only content, but also students' acquisition of core skills such as self-regulated learning. Our objective was to discover students' self-regulated learning processes during an e-learning course by using process mining techniques. We applied a new algorithm in the educational domain called Inductive Miner over the interaction traces from 101 university students in a course given over one semester on the Moodle 2.0 platform. Data was extracted from the platform's event logs, comprising 21,629 traces, in order to discover students' self-regulation models that contribute to improving the instructional process. The Inductive Miner algorithm discovered optimal models in terms of fitness for both Pass and Fail students in this dataset, as well as models at a certain level of granularity that can be interpreted in educational terms, which is the most important achievement in model discovery. We can conclude that although students who passed did not follow the instructors' suggestions exactly, they did follow the logic of a successful self-regulated learning process, as opposed to their failing classmates. The process mining models also allow us to examine which specific actions the students performed, and it was particularly interesting to see a high presence of actions related to forum-supported collaborative learning in the Pass group and an absence of those in the Fail group.
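Discovery algorithms like the Inductive Miner start from the directly-follows relation over the event-log traces: how often one activity is immediately followed by another. A minimal stdlib sketch of that first step, with hypothetical Moodle action names (the real log contains 21,629 traces from 101 students):

```python
from collections import Counter

# Each trace is one student's ordered sequence of platform events
# (action names here are hypothetical, not the actual Moodle event types)
traces = [
    ["login", "view_content", "quiz", "forum", "logout"],
    ["login", "view_content", "forum", "quiz", "logout"],
    ["login", "quiz", "logout"],
]

def directly_follows(traces):
    """Count how often activity a is immediately followed by activity b."""
    df = Counter()
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            df[(a, b)] += 1
    return df

df = directly_follows(traces)
print(df[("login", "view_content")])  # -> 2
```

From these counts the Inductive Miner recursively splits the log into sequence, choice, parallel, and loop blocks; libraries such as pm4py implement the full algorithm.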


Towards Dialogue Systems with Agency in Human-AI Collaboration Tasks

Sharma, Ashish, Rao, Sudha, Brockett, Chris, Malhotra, Akanksha, Jojic, Nebojsa, Dolan, Bill

arXiv.org Artificial Intelligence

Agency, the capacity to proactively shape events, is crucial to how humans interact and collaborate with other humans. In this paper, we investigate Agency as a potentially desirable function of dialogue agents, and how it can be measured and controlled. We build upon the social-cognitive theory of Bandura (2001) to develop a framework of features through which Agency is expressed in dialogue -- indicating what you intend to do (Intentionality), motivating your intentions (Motivation), having self-belief in intentions (Self-Efficacy), and being able to self-adjust (Self-Regulation). We collect and release a new dataset of 83 human-human collaborative interior design conversations containing 908 conversational snippets annotated for Agency features. Using this dataset, we explore methods for measuring and controlling Agency in dialogue systems. Automatic and human evaluation show that although a baseline GPT-3 model can express Intentionality, models that explicitly manifest features associated with high Motivation, Self-Efficacy, and Self-Regulation are better perceived as being highly agentive. This work has implications for the development of dialogue systems with varying degrees of Agency in collaborative tasks.
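The four-feature framework above implies a simple annotation schema for each conversational snippet. A minimal sketch, where the record layout, 0/1 coding, and aggregate score are illustrative assumptions rather than the paper's actual annotation format:

```python
from dataclasses import dataclass

@dataclass
class AgencyAnnotation:
    """Hypothetical annotation for one snippet, mirroring Bandura's four features."""
    snippet_id: str
    intentionality: int   # states what the speaker intends to do
    motivation: int       # gives reasons for the intention
    self_efficacy: int    # expresses belief the intention can be realized
    self_regulation: int  # adjusts the plan in response to feedback

    def agency_score(self) -> float:
        """Simple aggregate: fraction of agency features expressed."""
        return (self.intentionality + self.motivation
                + self.self_efficacy + self.self_regulation) / 4.0

a = AgencyAnnotation("snippet-001", 1, 1, 0, 1)
print(a.agency_score())  # -> 0.75
```

The paper's finding suggests models that score high on the last three features (Motivation, Self-Efficacy, Self-Regulation), not just Intentionality, are the ones perceived as agentive.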


Congress and AI - The New Stack

#artificialintelligence

Congress has never been the quickest off the mark when it comes to making laws dealing with technology. Now, even as AI takes over creative writing and art, Congress continues to sit idle. As legislators endeavor to comprehend generative AI programs such as Microsoft Bing, ChatGPT and Google Bard, some of the more technology-oriented lawmakers are apprehensive about a repeat of Congress's unpreparedness in responding to the previous major tech wave -- social media. Worries, however, don't appear to be leading to action. True, there's a backlash now for letting tech companies keep Washington at arm's length with promises of "self-regulation" on critical issues such as privacy protection, child safety, disinformation, cryptocurrency, and data portability.


High-tech legislation through self-regulation - Information Age

#artificialintelligence

A quick glance over our technological, scientific, and productive history over the past few decades shows a trend towards increasing specialisation. Getting into an area and becoming a true expert in it takes considerably more time than it did several decades or centuries ago. Business, while progressing slower towards the same trend, is still experiencing something similar. Explaining in-depth technical concepts with sufficient detail and nuance to a layman is becoming more troublesome. Machine learning is one such example – frequently used, but scarcely understood by people outside the technical world.


Two Schools of Thought for Responsible AI: Which One Do You Subscribe To?

#artificialintelligence

There are two schools of thought on how AI could be made more responsible. Let us understand both and see what I recommend. The Regulatory School of thought is one of the largest in political economics. Its origins date back to the early 1970s in France, when the economy was a wreck and there was a great deal of economic instability. Its founder, Destanne de Bernis, coined the term regulation, and his goal was to use the concept as a systems theory to update Marx's economics.